1 Introduction

Collaboration between humans and robots is becoming more feasible thanks to advancements in robotics. In accordance with the vision of a cyber-society, autonomous robots are being used to assist with different activities in close contact with people, in contexts ranging from workplaces (e.g. robots assembling automobiles in the manufacturing sector) to people’s daily home lives (e.g. a service robot helping in the kitchen or feeding a patient in need of care). Although the technology is not yet mature enough to be deployed widely, examples such as these are becoming more common, which calls for a better understanding of robots’ impact on human beings. We are particularly concerned that some people might have difficulty accepting such a cooperative relationship with robots. One way to study the social acceptance of such robots is to focus on attitudes toward them [53, 55]. Popular culture currently harbors negative attitudes toward autonomous robots [20, 34, 69], and robot abuse and antagonism toward robots can also be considered indicative of such attitudes. Given that the acceptance of robots in everyday life depends not only on technical but also on social and psychological aspects of human–robot interactions [35], it is necessary first to investigate what causes these negative attitudes and then to identify mechanisms that counteract them.

When interacting with novel entities such as autonomous robots, people tend to use existing social schemas to make sense of the situation. Perceiving a mind in robots is an example of this projection, i.e. the tendency to attribute human-like characteristics, motivations and intentions to non-human entities [64]. Mind perception has two dimensions: experience (the ability to feel emotions such as pleasure) and agency (the ability to act in ways such as self-control) [25, 26]. Although early work on uncanny feelings demonstrated that perceptions of a robot’s experience generate uncanny feelings [26], other recent studies (e.g. [3, 69]) have shown that a robot’s ability to act (i.e. perceived agency) also increases uncanny feelings. As Wallach and Allen state: “within the next few years we predict there will be a catastrophic incident brought by a computer system making a decision independent of human oversight” [62, p. 4]. Overall, these views suggest that robots’ improved capability to act or make decisions autonomously would induce negative attitudes toward robots. Nevertheless, despite the negative press given to autonomous robots, research demonstrating a link between agency and negative attitudes is inconsistent. The current literature offers contradictory findings about the effects of agency attributed to robots: while several studies reported positive attitudes toward high agency [48, 66], others found negative attitudes and outcomes [29, 53]. As a result, we have little understanding of the underlying mechanisms through which agency attributed to robots influences people’s behavior and attitudes toward robots.

In this study, we address this gap by investigating the effect of perceived control. We expect that perceived control moderates the impact of attributed agency and may help to explain some of the heterogeneity in reactions and attitudes toward robots. Perceived control is relevant to understanding the consequences of the increased attribution of agency to robots for three reasons. First, the High-Level Expert Group has identified human oversight as a key requirement in its guidelines on minimizing the negative effects of AI systems [16]; however, few studies have systematically investigated the role of human oversight and the perception of control in collaboration with agentic robots. Second, studies on social robot acceptance reveal that characteristics of both robots and humans influence attitudes toward robots. For instance, physical embodiment, consistent substrate and a match between a robot’s appearance and behavior foster users’ acceptance of the robot [22, 23, 63]. Other research indicates that men and younger people tend to have more positive attitudes toward robots than women and older people [27, 35, 59, 60]. While such demographic factors may predict non-acceptance, they cannot explain why negative reactions and non-acceptance arise [53]. In this respect, external factors such as perceived control could explain situations in which a high attribution of agency to robots does not imply negative attitudes and non-acceptance. Third, in the human–computer interaction (HCI) context, perceived control is defined as “the perception that one’s behavior significantly alters outcomes by producing desired and preventing undesired events” [30, p. 4]. Given the diverse effects of control on human cognition, attitudes, and behavior, it should not be surprising that emerging technology undermining humans’ control may be unpleasant. More specifically, such a relationship is theoretically supported by the concept of reactance, the tendency to react negatively toward threats to one’s behavioral freedom, whether in the form of eliminating or limiting it. According to reactance theory [8], people become aroused as a result of losing control over a situation, which may lead them to feel aggressively toward the entity attempting to restrict their freedom. The present study examines whether attitudes toward agentic robots depend on the control humans feel during the collaboration. This study’s contributions to the human–robot interaction (HRI) literature are thus twofold. First, it applies a relatively new theoretical conceptualization of control, namely perceived control as a situational appraisal. Second, it tests for moderation in the relationship between attributed agency and attitudes toward robots.

We specifically focus on collaboration with autonomous robots as an example of interacting with agentic robots. Agency is a characteristic that refers to perceived autonomy in robots’ behavior [45]. In a collaborative context, a robot that demonstrates proactive behaviors appears to have more agency than a robot that only responds to orders (i.e. reactive behavior). We understand agency as the capacity to perform a goal-oriented task on the environment, to some extent autonomously [68]. Rose and Truex describe machine agency as the extent to which machines are perceived by humans as having autonomy [45]. Accordingly, there is an undeniable relationship between autonomy and agency: as a robot’s level of autonomy increases, greater agency is attributed to it. Thus, collaboration with a robot with a higher level of autonomy provokes a higher attribution of agency to the robot.

2 Background and Related Work

Human–robot collaboration refers to humans and robots jointly completing tasks, with a focus on coordinating close, seamless joint activities between the two [1]. The participants’ mutual engagement enables problem solving that cannot be achieved without the two sides’ direct coordination and interaction. Furthermore, the focus of human–robot collaboration is not to replace a human with a robot but for the two to complement each other, contributing their strengths toward the mutual goal stated by the human agent [33]. Given the increasing interest in understanding dynamics beyond mere implementation, in this study we discuss the contextual conditions under which collaboration with agentic robots produces negative attitudes toward robots.

Prior studies of robots in work contexts provide some preliminary evidence for the effect of increased agency on acceptance and attitudes toward robots. Stafford reported that people were more likely to use robots when they perceived robots’ minds as having less agency [53]. Heerink et al. reported more anxiety toward a more adaptive robot than a less adaptive one [29]. However, the effects of collaboration with agentic robots are complex and may not lead only to negative outcomes. Different contextual factors may influence the relationship between agency attributed to robots and attitudes toward them. For instance, Wiese et al. showed that people were more willing to engage in joint attention with a robot when they treated it as a system with intentionality [66]. A study by Rau et al. found a marginally significant positive effect of the robot’s level of autonomy on its influence on human decisions when the robot was an in-group rather than an out-group member [44]. Liu et al. found that participants preferred to work with a robot that adapted to their actions over one that did not [37]. In another study, conducted by Schermerhorn and Scheutz, people attributed greater cooperativeness to a robot in autonomous mode and accepted dynamic autonomy when the robot made autonomous decisions in the interest of team goals, even going so far as to ignore instances of disobedience [48]. While these studies have provided important information about how people interact with autonomous robots, the existing evidence does not conclusively determine how an increased attribution of agency may affect people’s attitudes toward the robot. Given that contextual conditions can shape how individuals develop attitudes toward robots, perceived control may be a particularly important dimension of cognition to investigate.

The contextual variable investigated in this paper is perceived control. Our notion of perceived control differs from other interpretations of control, such as self-efficacy in HRI [43] or perceived behavioral control in the theory of planned behavior [2], which reflects users’ perceived ease of performing a behavior and treats perceived difficulty and perceived control as the same construct. Researchers argue that these two have different antecedents and should be considered separate constructs: perceived control is determined by a set of underlying control beliefs that mostly capture external factors (situational influence), while perceived difficulty is determined by internal factors (ability and skills) and therefore relates to self-efficacy [13]. Perceived control is defined as the “appraisal of the extent to which other people or events will interfere with the performance of the behavior” [57, p. 202]. It is an individual’s interpretation of and belief about how much control is available [52]. Moreover, research on the illusion of control has emphasized the importance of perceived control over locus of control [36]. While locus of control refers to individuals’ general beliefs about the main causes of events in their lives (i.e. an external or internal locus of control) [47], perceived control refers to a more situational perceived ability to affect the outcome of a course of action [41, 42]. Few studies have examined the concept of perceived control in HRI contexts [12, 32, 38]. While their results indicate that people prefer to be in control, it is unclear what this means for human–robot collaboration. In this context, humans can undertake actions or decisions indirectly through a robot, which may lead them to no longer consider themselves in control of the action. Feeling in control may improve the sense of agency when the achieved outcome of an action conforms to one’s intention [42]. According to Pacherie, the sense of control is one of the main contributors to the sense of agency (the sense that an individual is the author of an action) and comprises three more basic experiences: a sense of motor control, a sense of situational control and a sense of rational control [42]. While situational control is perceptual and represents “control of the action with regard to the situation as currently perceived” [42, p. 4], rational control exists at a higher level of abstraction and represents a more global consistency that is not affected by a single event or experience.

We therefore predict that increases in a robot’s agency are likely to have a negative effect on attitudes toward robots if humans feel less in control. This proposition receives support from [40], who argues that an increase in the level of autonomy is related to negative emotions in users because it leads them to experience a lack of control. Studies have shown negative arousal in social experiences and environmental conditions that can potentially diminish an individual’s perceived control [69]. Moreover, people who perceive high control are more confident about their performance, and people who perceive higher control may feel more comfortable collaborating with a robot because they do not see it as undermining their values or their contribution to the task [5]. As suggested by Hinds, when control appears to be in the hands of another entity, people experience negative feelings, which could lead to lower acceptance or use of that system [30]. Moreover, according to reactance theory [8], individuals react negatively to threats to their control. Therefore, conditions that reduce the perception of control may lead to motivational and cognitive deficits [54] and, consequently, to negative attitudes. Given that the perception of a robot’s agency increases as its autonomy rises [56], it is plausible that the positive relationship between a robot’s attributed agency and negative attitudes holds among individuals who feel low control over the task performance. Thus, we expect that those who are exposed to agentic robots and believe that they have little control will report more negative attitudes toward robots than individuals exposed to non-agentic robots (see Fig. 1).

Fig. 1 Our conceptual model, in which perceived control moderates the relationship between attributed agency and attitudes toward robots

Fig. 2 Example screenshot of the video vignette, which presents a collaboration between a person and a robot

3 Methodology

3.1 Participants

A total of 102 participants were recruited for this study through snowball sampling in order to expand the sample size and scope. Nine participants were excluded for failing the manipulation check of attributing the intended level of agency to the robot (i.e. the video failed to elicit the targeted level of the robot’s perceived autonomy in these participants). One outlier was also detected and removed. Thus, the final sample consisted of 92 participants (46% women and 54% men), ranging in age from 20 to 63 years old (\(M= 32.93\), \(SD= 9.86\)). The majority of respondents were highly educated (38% held a master’s degree), and half of the participants (50%) indicated having previous interaction experience with a robot. The data were collected between 5 and 23 February 2020.

3.2 Manipulation

To assess people’s attitudes toward the attributed agency of robots, we simulated a collaboration process around assembling a product in a factory using a vignette study. A video vignette was developed in which participants were asked to imagine that they were members of a human–robot manufacturing team; the manufacturer had recently acquired a new robot to work alongside people to improve their productivity. The goal, building a mixer, was set by the manufacturer and assigned to this team, and it could only be achieved if the two agents collaborated with each other. Given the product’s complexity, some of the tasks were to be done by the robot and others by the human participant. Figure 2 shows a screenshot of the video vignette that participants were asked to watch.

We manipulated the level of agency attributed to the robot as a between-subjects factor by varying the level of autonomy in the robot’s behavior. Two experimental conditions (HIGH agency and LOW agency) were implemented in the robot system to be evaluated. In the LOW agency condition, the robot participated reactively in decision-making and completion of the product, i.e. the employee made the decisions about the sequence of actions required to assemble the mixer, and the robot required human commands to complete its assigned actions. In the case of a problem or ambiguity, the person in the video had the option of asking the robot for help, to which the robot could respond only with suggestions about what to do. The robot communicated with the human agent through text on the tablet screen. In the HIGH agency condition, it was the robot that decided the sequence of actions and told the employee which tasks to carry out, and the user only had the right to veto the robot’s decisions. In the case of a problem or work stoppage, the robot proactively showed the person how to do the task without being asked. The main difference between the conditions is that the robot’s behavior in the former is relatively deterministic, while in the latter it is unpredictable and actions are not suggested but issued imperatively, reflecting the higher level of autonomy. A pilot test involving 40 students at TU Vienna was carried out to verify the feasibility and validity of the vignette. In addition to the question items mentioned below in the measures section, pilot participants were asked to describe in their own words what they had seen in the video. As a consequence of the pilot test, we added further information about the robot arm at the beginning of the video and questionnaire, as some of the pilot participants had difficulty recognizing that the interactive tablet belonged to the robot arm.

3.3 Procedure

Participants were randomly assigned to watch one of two videos, uploaded as private videos on YouTube. After reading an information sheet that outlined the purpose of the study and providing consent to participate, each participant watched a video of a person collaborating with either an agentic (HIGH agency) or non-agentic (LOW agency) robot in an assembly task. We asked people to respond from the vignette character’s perspective, as if they were that person in that situation, rather than on the basis of their own lives; this should help to reduce socially desirable response patterns [31]. After watching the video, participants completed a post-video questionnaire. The survey instruments were all provided in German; the questionnaire was constructed using a double translation procedure conducted by two different researchers fluent in both German and English.

3.4 Measures

For the manipulation check, participants were asked to indicate the level of the robot’s perceived autonomy on a 5-point Likert scale ranging from 1 (= low) to 5 (= high).

Attitudes toward robots were measured with 14 items from the Negative Attitudes toward Robots Scale (NARS) [39] and 11 items from the Robot Attitudes Scale (RAS) [9]. These scales have been applied successfully to measure psychological reactions to (human-like and non-human-like) robots both in a general sense and after interacting with a specific robot (e.g. [50, 65]). The NARS contains three subscales that measure negative attitudes toward (1) situations of interaction with robots (NARS-S1; a sample item is “I would feel uneasy if I was given a job where I had to use robots”), (2) the social influence of robots (NARS-S2; a sample item is “I feel that in the future society will be dominated by robots”), and (3) emotions in interaction with robots (NARS-S3; a sample item is “I would feel relaxed talking with robots”). All items were rated on a 5-point Likert-type scale ranging from 1 (= strongly disagree) to 5 (= strongly agree). A higher score on this scale indicates a more negative attitude. Cronbach’s alpha for NARS-S1, NARS-S2, and NARS-S3 was 0.72, 0.70, and 0.72, respectively. The official German adaptation of the questionnaire was used [7]. The RAS was used to rate what participants thought of robots on scales from 1 to 8. Items included the adjectives friendly, useful, trustworthy, strong, interesting, advanced, easy to use, reliable, safe, simple, and helpful. Higher scores on this scale are associated with less favourable attitudes toward robots. Cronbach’s alpha was 0.82.
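For reference, the internal-consistency coefficients reported here and below are Cronbach’s alpha, which for a scale of \(k\) items with item variances \(\sigma^{2}_{Y_i}\) and total-score variance \(\sigma^{2}_{X}\) is

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right). \]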

To measure perceived control, we used two items from [30]. A sample item is “I felt that I was in control”. These items were highly correlated (\(r = 0.71\), \(p<\) 0.01).

Participants reported basic socio-demographic information, namely age, gender (0 = female, 1 = male), level of education and previous experience with robots (0 = no, 1 = yes). A personality trait that has been shown to be related to perceived control is the desire for control [10, 24]. Individual differences in the level of motivation to control the events in one’s life were measured with 20 items from the Desirability of Control Scale (DC) [11]; Cronbach’s alpha was 0.81. All items were rated on a 7-point Likert-type scale ranging from 1 (= strongly disagree) to 7 (= strongly agree).

4 Results

An analysis of standardised residuals was carried out to identify any outliers, which indicated that participant 81 needed to be removed. The histogram of standardised residuals indicated that the data contained approximately normally distributed errors, as did the normal P–P plot of standardised residuals. Furthermore, the assumption of homogeneity of variance was checked using Levene’s test, and multicollinearity between the independent variables was examined using VIF coefficients. The results showed that multicollinearity was not a concern.
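For readers who wish to reproduce this style of screening, the sketch below shows an analogous workflow in Python. The analysis reported above was run in SPSS, so the DataFrame `df`, the file name and the column names (agency, control, nars_s1 and so on) are hypothetical placeholders, not the authors’ materials.

```python
# A minimal sketch of the reported diagnostics, assuming a pandas DataFrame
# `df` with hypothetical column names; this mirrors, not reproduces, the
# authors' SPSS analysis.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import levene
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("study_data.csv")  # hypothetical file name

# Outlier screening via standardized residuals of the focal regression.
X = sm.add_constant(df[["agency", "control"]])
model = sm.OLS(df["nars_s1"], X).fit()
std_resid = model.get_influence().resid_studentized_internal
print("Flagged cases:", list(df.index[abs(std_resid) > 3]))  # |z| > 3 rule

# Levene's test for homogeneity of variance across agency conditions.
low = df.loc[df["agency"] == 0, "nars_s1"]
high = df.loc[df["agency"] == 1, "nars_s1"]
print(levene(low, high))

# Variance inflation factors for the predictors (values near 1 are benign).
for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))
```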

Table 1 displays the means, standard deviations, and correlations of our study variables. There was no evidence of a direct relationship between attributed agency and attitudes toward robots, at least in terms of simple association. A weak but significant correlation between perceived control and RAS was found (\(r = -0.28\), \(p<\) 0.01): as perceived control increases, negative ratings of the robot decrease. We also found a moderate significant correlation between the extent to which participants attributed agency to the robot and how much control they perceived (\(r = -0.42\), \(p<\) 0.01), meaning that higher attributed agency was associated with lower perceived control. Significant correlations were also found between gender and RAS (\(r = 0.21\), \(p<\) 0.05), gender and NARS-S1 (\(r = -0.34\), \(p<\) 0.01), gender and NARS-S2 (\(r = -0.30\), \(p<\) 0.01), previous experience with robots and NARS-S1 (\(r = -0.36\), \(p<\) 0.01), previous experience with robots and NARS-S1 (\(r = -0.21\), \(p<\) 0.05), perceived control and age (\(r = 0.24\), \(p<\) 0.05), and perceived control and gender (\(r = -0.23\), \(p<\) 0.05). Age, education level and desirability of control were not associated with attitudes toward robots.
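As an illustration, the descriptives and zero-order correlations of a table like Table 1 can be computed as follows, again using the hypothetical `df` and column names from the previous sketch.

```python
# Sketch of Table 1-style descriptives and pairwise Pearson correlations.
from scipy.stats import pearsonr

cols = ["agency", "control", "nars_s1", "nars_s2", "nars_s3", "ras"]
print(df[cols].agg(["mean", "std"]).round(2))  # means and SDs
for i, a in enumerate(cols):
    for b in cols[i + 1:]:
        r, p = pearsonr(df[a], df[b])
        print(f"{a} x {b}: r = {r:.2f}, p = {p:.3f}")
```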

Table 1 Means, standard deviations and correlations for the study variables
Table 2 Moderation analysis results for attributed agency as independent variable

An independent-samples t-test was conducted to compare attitudes toward robots in the LOW and HIGH agency conditions. Lower RAS scores were reported in the LOW agency condition (M= 2.88, SD= 1.01) than in the HIGH agency condition (M= 3.37, SD= 1.08); t(90) = \(-2.25\), p = 0.03. However, there were no significant differences in NARS-S1 (HIGH agency: M= 2.10, SD= 0.86; LOW agency: M= 1.86, SD= 0.62; t(90) = \(-1.51\), n.s.), NARS-S2 (HIGH agency: M= 2.60, SD= 0.95; LOW agency: M= 2.65, SD= 0.67; t(90) = 0.34, n.s.) or NARS-S3 scores (HIGH agency: M= 3.20, SD= 0.89; LOW agency: M= 3.53, SD= 0.95; t(90) = 1.68, n.s.).
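A hedged sketch of this comparison in Python (same hypothetical `df`; a pooled-variance test matches the reported t(90) degrees of freedom for two groups totalling 92 participants):

```python
# Independent-samples t-test on RAS across agency conditions.
from scipy.stats import ttest_ind

low = df.loc[df["agency"] == 0, "ras"]
high = df.loc[df["agency"] == 1, "ras"]
t, p = ttest_ind(low, high, equal_var=True)  # pooled variance, df = n1 + n2 - 2
print(f"t({len(low) + len(high) - 2}) = {t:.2f}, p = {p:.3f}")
```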

Fig. 3 Interaction of attributed agency and perceived control on NARS-S1

To investigate whether perceived control moderates the relationship between attributed agency and attitudes toward robots, the SPSS script (Model 1) by Preacher and Hayes [28] was used. The results were tested using 1000 bootstrapped samples and 95 percent confidence intervals. Age, gender, level of education, previous experience with robots and desirability of control were entered as covariates.
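Conceptually, Model 1 is an OLS regression with an interaction term between the predictor and a mean-centered moderator. The sketch below, assuming the same hypothetical `df` and column names as in the earlier snippets, reproduces that logic in Python with a percentile bootstrap; it is an analogue of, not a substitute for, the PROCESS script.

```python
# Moderation analysis (PROCESS Model 1 analogue): attitude ~ agency *
# centered perceived control, plus covariates, with a percentile bootstrap
# of the interaction coefficient.
import numpy as np
import statsmodels.formula.api as smf

df["control_c"] = df["control"] - df["control"].mean()  # mean-center moderator
formula = ("nars_s1 ~ agency * control_c + age + gender + education"
           " + experience + desirability_control")
fit = smf.ols(formula, data=df).fit()
print("interaction coeff.:", fit.params["agency:control_c"])

# 1000 bootstrap resamples of the interaction coefficient.
boot = [
    smf.ols(formula, data=df.sample(len(df), replace=True, random_state=i))
    .fit().params["agency:control_c"]
    for i in range(1000)
]
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```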

As predicted, perceived control moderated the relationship between attributed agency and NARS-S1 (coeff. = \(-0.10\), \(p<\) 0.01, see Table 2). As Fig. 3 shows, when perceived control was low, there was a positive relationship between agency and NARS-S1. That is, for the individuals who perceived lower control, the more they attributed agency to the robot, the more negative they were regarding situations of interaction with robots.

The moderating effect of perceived control on the relationship between attributed agency and NARS-S2 was also significant (coeff. = \(-0.09\), \(p<\) 0.05, see Table 2). As shown in Fig. 4, when perceived control was low, the relationship between agency and NARS-S2 was positive. That is, for the individuals who perceived lower control, the more they attributed agency to the robot, the more negative they were regarding the social influence of robots.

Fig. 4 Interaction of attributed agency and perceived control on NARS-S2

The moderating effect of perceived control on the relationship between attributed agency and NARS-S3 was likewise significant (coeff. = \(-0.09\), \(p<\) 0.05, see Table 2). However, the overall model was not significant (F(8,83) = 0.97, n.s.).

Perceived control also moderated the relationship between attributed agency and RAS (coeff. = \(-0.10\), \(p<\) 0.05, see Table 2). Figure 5 shows that when perceived control was low, there was a positive relationship between agency and RAS, indicating that greater attributed agency was associated with less favorable attitudes toward robots among those who felt they had little control over the task.
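The interaction plots in Figs. 3, 4 and 5 correspond to simple slopes of agency at low and high perceived control. A minimal sketch, reusing the fitted model `fit` from the moderation snippet above:

```python
# Simple slope of agency at -1 SD and +1 SD of the centered moderator:
# in y = b0 + b1*agency + b2*control_c + b3*agency*control_c + covariates,
# the agency slope at control_c = c is b1 + b3*c.
sd = df["control_c"].std()
for label, c in [("-1 SD", -sd), ("+1 SD", sd)]:
    slope = fit.params["agency"] + fit.params["agency:control_c"] * c
    print(f"agency slope at {label} control: {slope:.2f}")
```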

Fig. 5 Interaction of attributed agency and perceived control on RAS

5 Discussion

In this study, we focused on how agency attributed to robots is related to attitudes toward them by investigating the intervening role of perceived control in a collaboration process. In particular, we examined whether perceived control moderates the relationship between attributed agency and attitudes toward robots.

The results of our study confirmed our hypothesis for RAS and two subscales of the NARS (NARS-S1 and NARS-S2): the relationship between attributed agency and attitudes is contingent on perceived control. For NARS-S3, despite a significant interaction term between perceived control and attributed agency, the overall model was not significant. We found that when perceived control is low, there is a positive relationship between attributed agency and negative attitudes toward situations of interaction with robots as well as toward the social influence of robots. This indicates that those who attributed relatively higher agency to the robot reported relatively less favorable attitudes toward robots than those who attributed lower agency, but only among those with relatively low perceived control. These findings can be explained by the stress literature [19, 21], which indicates that a lack of control undermines individuals’ ability to cope with a stressful situation and that even the mere perception of control (i.e. without control actually being available) can reduce stress. Consistent with reactance theory [8], our results show that a lack of control results in people holding negative attitudes toward robots.

We also observed some demographic differences in attitudes toward robots. Consistent with the literature [17, 59], this study found that female respondents and individuals with no previous interaction with robots were more likely to report negative attitudes toward robots. Moreover, this study supports previous observations (e.g. [51, 59]) of no significant correlation between age and attitudes toward robots.

In accordance with prior studies noting the importance of individuals’ subjective attributions of agency [18, 53, 69] in explaining attitudes toward robots, the aim of our study was to further investigate psychological factors, such as perceived control, involved in the acceptance of robots. Numerous efforts are focused on including humans in the decision-making loop to improve the quality of task plans and schedules for autonomous systems [4, 15]. To our knowledge, the current study is the first to empirically investigate the role of perceived control and human oversight in a human–robot collaboration context. Our results yield important insights into how human cognition is developing alongside the robots we are creating, helping us to understand the factors that facilitate or hinder their social acceptance. Given that individuals’ preference for control has been found to increase over time as they get used to a robot [32], it is necessary to provide cognitive and behavioral resources for individuals who work alongside robots. Thompson identifies factors that can enhance perceptions of control, such as assessing skill acquisition, costs and benefits, and the accuracy of self-efficacy expectations [58]. Accordingly, we can infer that if the context encourages individuals to perceive more control in their collaboration with robots, the relationship between attributed agency and attitudes should become positive. In this study, we also found that perceived control was related to respondents’ gender and age. A further study focusing on other factors affecting the perception of control in human–robot collaboration is therefore suggested. Furthermore, this study showed that the perception of control can be induced more strongly when actions are selected by the individual rather than instructed by a robot. A further potential direction for future studies would be to determine how the valence of an action affects perceived control. Experimental research in neurobiology has shown that unpleasant outcomes lead to a lower feeling of control, and consequently a lower sense of agency, compared to positive outcomes [6, 67]. These mechanisms need to be further investigated in the HRI context, as recent studies [14, 46] found that the social presence of a robot reduces the sense of agency over self-generated actions. Since the enhancement of human agency is necessary to protect human rights [61], further studies critically investigating the effect of robots on our sense of agency and perception of control would be worthwhile.

This study is subject to several limitations. First, there is a need to investigate interaction contexts where participants are interactants and not observers. The social interaction literature suggests that social cognition may differ when the person is an interactant rather than an observer of interactions [49]. Our results therefore need to be interpreted with caution, and future research could seek to verify our findings in real human–robot interaction scenarios to gain a better understanding of the buffering role of perceived control. Second, the analysis was based solely on Austrian data, and the results cannot be generalised directly to other countries. The literature suggests that responses to questions about robot-related attitudes reflect respondents’ individual experiences and perspectives and may be susceptible to cultural differences [39, 55] and a country’s technological orientation [59]. However, there is little reason to suppose that control perceptions depend on culture or country in the same manner. In addition, this study focused on perceived control and tested how it acts as a boundary condition for the positive relationship between attributed agency and negative attitudes toward robots. As individuals’ reactions to perceived control rest on their motive to control [10], future research might integrate the desire for control into the model and investigate the effects of a match or mismatch between perceived control and desire for control.

Consistent with other studies [35, 44], our research further highlights the importance of including both technical and social aspects in designing robots. The findings may help illuminate the process by which agency attributed to a robot is linked to the development of negative attitudes toward it. When individuals feel in control and believe that they themselves, rather than others’ actions or external factors, determine the task outcome, they tend to feel more comfortable collaborating with robots. Thus, beyond asking whether the attribution of agency leads to negative attitudes toward robots, future studies should consider the nature of perceived control in the collaborative context, which explains when agency attributed to robots is associated with positive attitudes.

6 Conclusion

The purpose of the current study was to examine the conditions under which a robot’s attributed agency is associated with negative attitudes toward robots by addressing the role of perceived control. The results indicate that increases in attributed agency are associated with negative attitudes toward robots when individuals feel a lack of control in a collaborative context with robots. The findings suggest that perceived control can mitigate negative attitudes and foster social relationships between humans and robots.