Abstract
As domestic service robots become more prevalent and act autonomously, conflicts of interest between humans and robots become more likely. In such situations, the robot must be able to negotiate with humans effectively and appropriately to fulfill its tasks. One promising approach is the imitation of human conflict resolution behaviour and the use of persuasive requests. The presented study complements previous work by investigating combinations of assertive and polite request elements (appeal, showing benefit, command), which have been found to be effective in HRI. The conflict resolution strategies each contained two types of requests, the order of which was varied to either mimic or contradict human conflict resolution behaviour. The strategies were also adapted to the users’ compliance behaviour: if the participant complied after the first request, no second request was issued. In a virtual reality experiment (\(N = 57\)) with two trials, six different strategies were evaluated regarding user compliance, robot acceptance, trust, and fear and compared to a control condition featuring no request elements. The experiment featured a human-robot goal conflict scenario concerning household tasks at home. The results show that in trial 1, strategies reflecting human politeness and conflict resolution norms were rated as more acceptable, polite, and trustworthy than strategies entailing a command. No differences were found for trial 2. Overall, compliance rates were comparable to those of human-human requests and did not differ between strategies. The contribution is twofold: an experimental paradigm to investigate a human-robot conflict scenario is presented, and a first step towards developing acceptable robot conflict resolution strategies based on human behaviour is provided.
1 Introduction
Currently, service robots are still small and limited in their functions but soon will become larger, more versatile, and autonomous [1, 2]. This will change their social role from simple, task-performing robots to sociable household members [3, 4] which increases the likelihood of human-robot conflicts (e.g., goals, priorities) [5, 6]. For instance, imagine standing in your kitchen making preparations for a party that is about to start soon. Your service robot enters and asks you to step aside so it can clean the floor. Would you comply? How could the robot persuade you to do so?
Possibly, one would assume that there would be no priority conflict because the robot should always defer [4, 7]. However, if the robot constantly yielded, it would be inefficient in the long run, contradicting the objective of a service robot [7,8,9]. In an emergency, it could even be dangerous if the robot is programmed to always be submissive (e.g., not raising an alarm so as not to interrupt the owner). Scenarios like these illustrate the importance of robot assertiveness for future HRI and, consequently, for robot interaction design.
As assertiveness implies social power, an autonomous robot being assertive represents a novelty for HRI [8]. This results from the asymmetrical relationship regarding superiority and power that humans have with robots, which has been the status quo in HRI for decades [4]. As user studies show, humans desire to control the robot and tend to be skeptical about robot autonomy or even reject it [10,11,12]. As human-robot cooperation relies on trust and acceptance [1, 13,14,15], assertive robot behaviour has to be designed appropriately: simultaneously acceptable, trustworthy, and effective [9]. Therefore, the following HRI research question arises: How can a domestic service robot assert itself when making a request while at the same time being perceived as polite, trustworthy, and acceptable?
In previous work [9], assertive robot conflict resolution strategies were developed based on the following assumptions: (a) goal-conflict resolution with a robot might be comparable to negotiating with a fellow person [9] based on the Media Equation [16], (b) knowledge from social psychology about polite and effective human conflict resolution and request-making can be transferred to develop robot strategies [9], (c) various strategies and techniques are applied by human negotiators that can range from polite to persuasive to assertive [17,18,19].
To investigate which of those human strategies can be transferred to HRI, a previous online study [9] applied systematic strategy-sampling and evaluated fifteen strategies regarding acceptability and effectiveness for different robots in the public and domestic context [9]. The strategies were either based on politeness, persuasion, or assertiveness. The results showed that polite strategies (e.g., an appeal) and persuasive strategies (e.g., explaining why one might benefit from compliance) were acceptable but not effective. Assertive strategies (e.g., a command) showed the potential to be effective but were not acceptable as a single request [9].
In the previous studies, however, only single request elements such as an appeal [9, 20,21,22,23] or a command [9, 24, 25] were investigated (for more details, see Sect. 2.4 Request Elements and their Application in HRI). Human communication, especially negotiation, is an intricate sequence of different expressions and tactics that can be combined into different strategies [17, 18]. These strategies are then used adaptively and strategically to assert the negotiator’s interests while simultaneously avoiding being impolite [19, 26, 27].
The application of isolated, single requests in previous HRI studies might explain why assertive robot requests were not considered acceptable by the participants. As in human negotiations, a combination of assertive robot requests with politeness or persuasion could generate the combined benefits of both strategies and alleviate the potential negative effects of assertive request elements. Hence, it might be crucial for robots’ assertiveness to imitate not only human request strategies but also the meta-strategies of human conflict resolution: adherence to politeness norms and adaptation of strategy use based on the other party’s behaviour [9].
Consequently, this study aims to investigate how robot conflict resolution strategies should mimic human interaction behaviour in terms of politeness norms and adaptation of requests for an assertive robot to be acceptable and effective. To develop these strategies, different request elements (appeal, showing benefit, command), found to be effective in HRI, were combined (see Fig. 2). The elements’ sequence within the strategies was varied to either imitate or contradict human conflict resolution behaviour. The strategies were also adapted to the users’ compliance behaviour. The resulting six different strategies were evaluated in a Virtual Reality (VR) user study regarding user compliance, robot acceptance, trust, and fear and compared to a control condition featuring no request elements. The experiment featured a human-robot goal conflict scenario, as described in the example above, with a humanoid service robot at home (see Fig. 1). The contribution of this paper is twofold: results regarding robot interaction design are reported, and an experimental paradigm to investigate a human-robot goal conflict scenario is presented.
2 Theoretical Background and Related Work
2.1 Advantages of Investigating Human-Robot Cooperation in Virtual Reality
Virtual Reality simulations have been shown to be a useful tool to investigate human-robot cooperation and collaboration in HRI [28, 29]. Advantages of VR in HRI design include (a) fast and easy prototyping of interaction scenarios, (b) overcoming potential hardware limitations of robots, and (c) the potential to investigate a wide range of different robots that do not need to be present in the laboratory [28, 29]. Various robot behaviours can thus be implemented economically on different platforms, and participants’ behaviour can be observed [30, 31]. More importantly, experimental control and standardization of robot behaviour are high, making VR simulations a valuable addition to online and laboratory studies [28]. Regarding the validity of findings from simulated interactions compared to laboratory interactions, previous studies comparing both methods have found similar results (e.g., [29, 32]).
Based on these considerations, a feasible way to test the presented conflict resolution strategies’ effectiveness was performing a user study in VR. The VR simulation was used to create an immersive user experience and a controlled setup with a humanoid service robot. The VR provided an economical testing opportunity and enabled us to overcome hardware restrictions (e.g., increasing the height of the robot). Therefore, VR was not the primary research topic but was used as an experimental tool.
2.2 Human Conflict Resolution and Politeness Norms
In human conflict resolution, it is fundamental that both negotiating parties understand each other’s conflict goals to reach an agreement effectively [33, 34]. This also holds for requesting compliance in HRI, as the user has to understand what the robot is trying to do in order to help it. Understanding the robot’s reasons and intentions is part of the so-called ‘human-robot awareness’ [35, 36], which is based on the concept of situation awareness [37,38,39]. The user and the robot have to be aware of each other’s states, intentions and locations to interact effectively [35, 36]. Human-robot awareness has been shown to reduce uncertainty, mitigate the adverse effects of malfunctions and foster trust [40, 41]. In a previous study where goal transparency was not considered, compliance rates with a robot request were low, as participants reported not to have understood the robot’s behaviour [42]. In their interaction design framework for help-seeking robots, Backhaus and colleagues also recommend avoiding ambiguity when communicating a robot’s request [43]. Therefore, in the present study, the robot explained the task goal before posing the request. To pose the request, the robot then either adhered to human politeness norms or not, depending on the conflict resolution strategy.
Politeness norms represent certain expectations about what is considered appropriate social behaviour based on individual experience and cultural background [44]. Especially for posing requests in human interactions, politeness norms (e.g., directness, politeness markers) are essential, as a request poses a face threat [45]. Brown and Levinson [45] constructed their well-established Theory of Politeness around the idea of ‘face’: the public image we try to preserve. Their theory describes two types of face that occur in all human interactions: a positive face (whether one feels liked and appreciated by others) and a negative face (whether one feels limited in one’s autonomy) [45, 46]. Face-threatening actions damage this public image, and the role of politeness is to reduce the danger of such actions. A request is a face-threatening act, as it threatens the other person’s autonomy [45,46,47]. Therefore, in human interactions, requests that follow specific politeness norms are more likely to be accepted and obeyed [48, 49]. In particular, an assertive request, which constitutes a substantial face threat to the other person (even more so if it comes from a robot), is recommended to be preceded by politeness strategies during negotiations [18]. Humans also expect robots to adhere to human politeness norms and are disappointed if robots violate them [22, 50, 51]. Therefore, adhering to human politeness norms when the robot poses a request might alleviate the potential adverse effects of robot assertiveness. For instance, before using a command, a social robot might benefit from applying a polite request first.
Apart from adhering to politeness norms, adapting one’s conflict resolution strategy based on the other party’s behaviour has been shown to be more effective in achieving joint gains (i.e., a win-win situation) than sticking to one strategy in human conflict resolution [19, 27]. A successful approach to promoting one’s goals is to start with a polite request to appear as a trustworthy interaction partner and then to become more insistent if the other person does not comply [19, 27]. If a robot also adapted its strategy to the human, readily compliant individuals would not be confronted with an assertive robot unnecessarily.
2.3 Persuasive Robotics and Robot Assertiveness
Acquiring compliance from technology users has been studied in the research field of persuasive technologies (for an overview, see [52, 53]) and especially persuasive robotics (for an overview, see [54, 55]). Behaviour change by persuasive robots has been previously investigated and shown to be successful, for instance, to promote attitude change [56] and influence decisions [24, 54, 57, 58].
Robot assertiveness is a relatively new approach to achieving compliance with a robot’s request. The aim is that ’[...] humans can recognize the robot’s signals of intent and cooperate with it to mutual benefit’ ([8], p. 3389). However, robot assertiveness is often confused with aggression and dominance [8] and might therefore be feared by users [59]. So far, studies investigating assertive robots have yielded mixed results regarding user acceptance, trust and compliance [9, 60,61,62]. Although [60] and [62] found a positive effect of robot assertiveness on compliance, this was not found in [9, 61]. Some studies even found reactance: participants perceived the robot as rude and, out of defiance, did not comply [9, 24]. Reactance as a result of human persuasion attempts is a common phenomenon in social psychology (for a review, see [63]) and persuasive robotics [24, 55, 64]. It results from a perceived threat to personal autonomy and includes cognitive (e.g., adopting the opposite position) and emotional reactions (e.g., anger) [58, 63]. Persuasion attempts based on politeness [65] and combinations of different forms of requests (e.g., the foot-in-the-door strategy, where a small request is presented first and then a larger one) [66] try to avoid reactance.
To summarize, a gap currently exists in research on how an assertive robot can be effective and accepted at the same time. For a robot to be assertive, the persuasive and assertive request elements and their combination with politeness have to be chosen carefully to form an effective and acceptable robot conflict resolution strategy. The present study complements previous work by investigating combinations of assertive and polite request elements (appeal, showing benefit, command) which, when applied as single requests in HRI, have been found to be either effective but not accepted or vice versa: appeal and showing benefit were acceptable but not effective [9, 22, 23], while a command was in some cases effective but not acceptable [9, 24, 25]. In the following, the psychological background and previous HRI application of each request element are described in more detail. By combining the requests, a synergy effect may be found (e.g., combining a command with an appeal could render the resulting strategy both effective and acceptable).
2.4 Request Elements and their Application in HRI
2.4.1 A Polite Request: Appeal
Apart from its relevance in human conflict resolution [18], politeness is also fundamental for acceptance and trust in HRI [67, 68]. It has been frequently used to achieve a positive evaluation of the robot [47, 51, 59, 69], for seeking help [42, 59, 70] or for gaining compliance with a robot’s request [20, 21]. In general, a beneficial effect of politeness (e.g., appealing, apologizing) on robot evaluation has been found [51, 67, 69]. Still, concerning user compliance, results are mixed: whereas some studies found user compliance with a polite request [20, 70], others did not (for a review, see [22]). Politeness in requests has mostly been implemented by using politeness markers (i.e., ’please’), indirect language (’Would you’) and hedges (’I think’) [71]. The most common forms of politeness in HRI have so far been pleading, apologizing and thanking [59, 70, 72]. Notably, an appeal was shown to be the best-accepted positive strategy in the domestic context in a previous study [9]. Hence, an appeal was chosen as a request strategy in the present study, and it was explored whether a polite request could render a robot command more acceptable and, vice versa, whether a command could make an appeal more effective.
2.4.2 An Assertive Request: Command
A command is a decisive form of making a request, representing a fast and easy request strategy. Although it may appear condescending or controlling, it represents a precise and potentially successful mode of communication [73, 74]. However, as a command can sometimes be perceived as patronising or controlling (especially if uttered by a robot [59]), a robot using a command could benefit from combining it with politeness, as human negotiators would do [18]. Commands have been applied in HRI, amongst others, to achieve compliance with tasks [9, 25, 67, 72, 75] or for robots giving advice [24, 76]. Results showed that robots using commands were less accepted but in some cases more effective than robots applying polite requests. For instance, participants were more motivated to do more fitness exercises when coached by an impolite robot fitness trainer than by a polite one [77]. While tested in an ethically questionable experiment (i.e., a Milgram-style experiment), a robot that used a command achieved user compliance similarly to humans [25, 72]. However, some studies also found reactance to robot commands. Participants being advised on energy consumption reported more negative thoughts when a robot uttered a command compared to a suggestion [24]. In a previous study regarding assertive robots in the domestic context, a notable percentage of participants did not step aside for the robot when it used a command compared to a polite request [9]. Until now, these studies have examined commands as single strategies without the possible attenuating effect of polite preceding words. Potentially, by combining a command with a polite request, reactance could be reduced. Consequently, in the presented study, we wanted to explore whether the acceptance and effectiveness of a robot command could be improved by combining it with a polite request element.
2.4.3 A Persuasive Request: Showing the Benefits of Cooperation
A persuasive technique to influence the other party’s decision-making in human conflict resolution is to emphasize the benefits that cooperation would bring for the other party (e.g., making promises about specific outcomes) [78, 79]. In HRI, showing the benefits of cooperation to the robot user has been applied in two studies so far. The first one used it as a vacuum cleaner’s help request (’If I clean the room, you will be happy’) and showed that it can alleviate the adverse effects of malfunctions [23]. However, only the second study examined user compliance and acceptance [9]. This online study found ’showing benefits’ to be one of the most acceptable conflict resolution strategies for the domestic context. Still, less than half of the participants complied with the robot’s request [9]. To render a persuasive robot request more effective, human negotiation behaviour should be considered. A human negotiator is effective when first emphasizing the respective benefits of cooperation for the other party and then becoming more assertive by clearly stating what is requested to reach an agreement [26, 27]. Additionally, combining persuasive arguments with politeness elements might make it more likely that the other party listens to the benefits of cooperation, as in human negotiations [80]. Therefore, it seems promising to strategically combine the persuasive robot request with polite and assertive elements to see if this renders a robot more effective in conflict resolution.
2.5 Hypotheses and Research Questions
The three presented request elements were systematically combined to form two-step conflict resolution strategies inspired by human negotiation, persuasion and conflict resolution. The resulting six adaptive conflict resolution strategies (see Fig. 2) were applied in a VR user study with fifty-seven participants who experienced two conflict interactions with a humanoid domestic service robot. The strategies were evaluated regarding user compliance, acceptance, trust and fear, for which the following assumptions were made.
If the robot applies a request sequence that matches human politeness norms of conflict resolution (politeness first), it will be more accepted than if the request sequence is unusual (politeness second). Additionally, if a robot uses an assertive element combined with politeness, it will be more accepted than if it used only an assertive element, and this combination might render the robot more effective than if it used only a polite element. In this way, the advantages of assertiveness can be used without risking a negative user evaluation of the robot.
- H1. Polite (pol-ben & ben-pol) and assertive request sequences that fulfil the human politeness scheme (pol-com & ben-com) will lead to higher compliance than sequences that do not (com-pol & com-ben).
- H2. Polite (pol-ben & ben-pol) and assertive request sequences that fulfil the human politeness scheme (pol-com & ben-com) will be rated as more acceptable, polite and trustworthy than sequences that do not (com-pol & com-ben).
Apart from the two hypotheses, two research questions were formulated based on the potential benefits of adaptive robot strategies (RQ1) and potential influencing factors on user compliance and strategy acceptance (RQ2).
Regarding RQ1, it was assumed that if a social robot adapts its conflict resolution strategies to the user’s behaviour as humans do, it might be accepted and achieve user compliance. Only if the user did not comply would a more assertive request be applied. In this way, compliant users would not be confronted with an assertive robot, and the robot would not risk a negative evaluation. Hence, for compliant users, it might be beneficial for the robot to stop after a successful first request and not utter the second request. Vice versa, for users who do not comply with a polite robot request, it might be necessary to become more assertive, similar to human conflict resolution behaviour [26].
Regarding RQ2, user characteristics such as age, gender [81], personality traits (e.g., Big5) and negative attitudes towards robots have been found to influence HRI (for reviews see [82, 83]). Therefore, they were also examined in the present study as potential influences on compliance and strategy acceptance. Hence, the following research questions were formulated:
- RQ1. Is it beneficial for a robot to use adaptive requests similar to human requests?
- RQ2. Which factors influence strategy acceptance and user compliance with robots’ requests?
3 Method
3.1 Study Design
A 7x(2) mixed design was applied with the six different strategies and the control group as between-subjects factor and the measurement repetition as within-subjects factor. The experimental groups consisted of eight participants each. Exceptions were the control group (\(n = 7\)) and the pol-ben and pol-com groups (both \(n = 9\)) due to the exclusion of participants (see Sect. 3.2). All participants were randomly assigned to the conditions. Each strategy contained two elements that were systematically varied, resulting in six different strategies (see Table 1). The strategies were adapted to the users’ compliance behaviour, so the second request was only uttered if the user did not comply with the first request (for more details, see Fig. 5). The trial was repeated to avoid potentially biased results due to the novelty of the interaction with an assertive humanoid service robot. As dependent variables, user compliance with the robot’s request, user acceptance, perceived politeness, trust and fear of the robot’s behaviour were assessed. The study was carried out in accordance with the Declaration of Helsinki. Ethical approval was received from the ethical committee of Ulm University (No. 378/19).
3.2 Sample
In total, 68 participants were recruited via email, social media and leaflets on campus. Due to the VR headset, exclusion criteria for participation were epilepsy, migraine, pregnancy, dizziness, cardiovascular disorders, hearing impairment or glasses. Additionally, during data analysis, eleven participants had to be excluded due to incomplete data or technical issues.
The final sample included 57 participants (68% female) with an average age of 25 (SD = 6, range: 18-48). They were mainly students (89%), mostly of psychology (83%). The majority (77%) did not have prior experience with robots. Those who did had encountered a domestic cleaning robot (38%) or an industrial robot (15%). Only four percent owned a robot: two owned a domestic cleaning robot and two a toy robot. Robot attitudes as assessed with the Negative Attitude Towards Robots Scale (NARS [84]) were average (\(M = 3.4, SD = 1.02\) on a 7-point Likert scale) and comparable to other samples from the same country [85]. Participants’ characteristics and robot attitudes did not differ between experimental groups (see Table 3).
3.3 VR Equipment and Simulated Robot
The VR environment featured a household kitchen, a sorting task and a humanoid robot. The VR simulation was programmed with the Unity game engine (Version 2019.2.0f1). It used several 3D models to resemble a kitchen environment (see Fig. 1) to which we added our custom-made shelf for the sorting task. The VR headset ’Vive Pro’ (HTC) was used, including its hand-held controllers for the sorting task.
As a basis for the simulated robot, the commercially available robot REEM (PAL Robotics) was chosen (see Fig. 1), as it represents a humanoid service robot for domestic use. However, for experimental purposes, the 3D model was made taller than its real-world counterpart (1.75 m instead of 1.70 m) to bring it up to participants’ eye level. This has also been done in a previous study with a different robot [86].
3.4 Human-Robot Goal Conflict Scenario
To investigate the robot conflict resolution strategies, a conflict scenario was created based on other conflict scenarios used in HRI, like doorway conflicts [8], pass-by conflicts in narrow spaces [87, 88] and game-theoretical approaches of mutually exclusive goals such as the chicken game [89]. The scenario entailed a conflict between the user and the robot. Both were performing separate tasks with mutually exclusive goals: either the robot’s task or the user’s task could be done, so one of the interaction partners had to defer. The robot tried to persuade the user to defer by using a conflict resolution strategy that varied in the expressed level of politeness, assertiveness and reasoning. The user then had to weigh the costs and benefits of compliance and non-compliance with the robot’s request, as well as the request itself, to reach a decision. Time pressure was introduced to intensify the goal conflict between the participant and the robot, as time pressure has been shown to make cooperation and concession in negotiations more likely [90].
3.4.1 Scenario Framing
The participants were presented with the following scenario framing: The participant was asked to imagine that s/he was having a get-together with friends who were supposed to arrive in 15 minutes. Before the guests arrived, s/he would have to put away the groceries, and the kitchen floor would have to be cleaned by the robot. Participants were instructed that, while clearing up in their kitchen, the robot would enter the room to clean the kitchen floor and would talk to them. The participants would then have to decide whether to comply with the robot’s request or to continue with their own task.
3.4.2 Sorting Task
To provide the participants with a task competing with the robot’s task, a dish-sorting task with gamification elements was developed. The task was similar to sorting tasks used as secondary tasks in human-machine cooperation [91]. Participants had to sort 20 items of five different types of tableware into a 3x3 shelf (see Fig. 1) correctly and as fast as possible using the VR controllers, which allowed for a natural interaction including grabbing and mid-air item drag and drop (handedness was respected). Images of the items in the background of the shelf indicated the correct location. The shelf also displayed the number of remaining items. The correct sorting of an item was rewarded with a positive sound and the reduction of the counter by one.
To trigger a fast decision regarding the priority of the robot’s task or one’s own, each element had to be placed correctly within 8 seconds (indicated on the shelf); otherwise, the total number of elements increased by one. This feature of the task was explained to the participants beforehand, and they learned about it while practising the task, so that they were not surprised during the experimental trials.
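The counter rule described above can be sketched as follows. This is a minimal illustration, not the study's actual Unity implementation: the function name and the handling of incorrect placements are our assumptions; only the 8-second limit and the increment/decrement behaviour come from the text.

```python
# Sketch of the sorting-task counter rule (illustrative; not the study's code).
TIME_LIMIT = 8.0  # seconds allowed per item, as stated in the text

def update_counter(remaining: int, sort_time: float, correct: bool) -> int:
    """Return the new number of remaining items after one sorting attempt."""
    if correct and sort_time <= TIME_LIMIT:
        return remaining - 1   # correct and in time: one item fewer
    if sort_time > TIME_LIMIT:
        return remaining + 1   # too slow: the total increases by one
    return remaining           # assumption: a wrong placement leaves the count unchanged

print(update_counter(20, 5.0, True))   # 19
print(update_counter(19, 9.2, True))   # 20
```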
3.5 Strategy Selection and Implementation
As the conflict situation was the same for all conflict resolution strategies, it could be investigated whether the strategy itself made the difference in persuading the user to comply with the robot’s request. The conflict resolution strategies were implemented by combining the three different request elements (appeal, showing benefit, command). The request elements were chosen based on their successful application in human conflict resolution and their previous application in HRI, as described above. The presented study builds upon the strategy sampling performed in our previous study [9]. The request elements chosen for the presented study represent the best-accepted verbal strategies from that study: appeal and showing benefit. In addition, a command, a controversial strategy that was not well accepted but effective in the previous study, was selected to investigate whether a robot command could be made acceptable by combining it with polite or persuasive elements.
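As a minimal illustration of this combination scheme, the six two-step strategies are simply the ordered pairs of distinct request elements. The short labels follow the paper's abbreviations (pol, ben, com); the mapping dictionary is our illustrative construct.

```python
from itertools import permutations

# The three request elements and their short labels as used in the paper.
LABELS = {"appeal": "pol", "benefit": "ben", "command": "com"}

# All ordered pairs of distinct elements yield the six two-step strategies.
strategies = ["-".join(LABELS[e] for e in pair)
              for pair in permutations(LABELS, 2)]

print(strategies)
# ['pol-ben', 'pol-com', 'ben-pol', 'ben-com', 'com-pol', 'com-ben']
```

The six resulting labels match the strategy names used in the hypotheses and in Table 1.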
The elements’ wording was identical to our previous study and is shown in Fig. 3. The wording of the requests was developed based on research regarding polite request-making [47, 71].
- For the polite request, the politeness marker ’please’ in combination with the counterfactual modal (’Would you ...’) was applied, which provides the addressee with a respectful opportunity to deny the request [71].
- For showing benefit, the polite request was combined with an emphasis on the benefit of cooperation, a technique often used to influence the interaction partner’s decision-making [78, 79]. In our scenario, the benefit would be a clean kitchen if the user complied with the robot’s request.
- For the assertive request, no politeness markers were used and it was formulated as a direct command. Commands ’[...] represent[s] a precise and potentially effective form of communication as politeness markers (i.e., please) do not mask the actual statement’ ([9], p. 5).
- As goal transparency is a prerequisite for conflict resolution [33], the goal explanation (’I would like to continue to clean here.’) preceded each conflict resolution strategy in the present study.
- The control group received only the goal explanation, without any persuasive element or second request.
3.6 Study Procedure
The study procedure is shown in Fig. 4. Before the subjects were invited to the lab, they filled in an online questionnaire at home, which took about 20 to 30 minutes and consisted of questions regarding user demographics, personality (BIG5 [92], dispositional empathy [93] and conflict style (ROCI-II) [94]) and the commonly used robot attitude questionnaire (NARS [84]).
The lab appointment lasted about 40 minutes and took place in a university laboratory. It consisted of several parts. First, the subject was informed about the study and signed the informed consent form. Then the VR headset and the controller were explained to the participant. The experiment started with a 2-minute VR familiarization, during which the participant could explore the virtual environment without the robot. Then the sorting task was explained to the subject, who could practice it with ten sorting elements.
When the sorting task was understood, the robot introduced itself to the subject with a one-minute intro in which the robot briefly explained its functions. The subject then answered the questionnaires regarding their first impression of the robot: humanness [95], uncanniness [96], acceptance [97], trust [98] and fear (see Table 2). The robot introduction was also used as a learning experiment by the co-author (see [99]).
Two experimental trials followed (for more details, see Fig. 5) during which participants were engaged in the sorting task until the robot entered the room. Once the participants reached nine remaining elements on the counter, the robot entered the kitchen after a knocking sound was played. After stopping next to the participants, the robot would use one of the six conflict resolution strategies.
The participant then had to decide either to continue with their task and thus ignore the robot’s request or to follow it by stepping aside. If the participant cleared the way, the robot cleaned the floor in front of the cupboard and left the kitchen immediately afterward, which interrupted the participant’s task. In case the participant did not move to the side, the robot started its second request after waiting for ten seconds and checking if the user was still blocking the way. The participant then again had the choice between continuing the task or stepping aside. If the participants also did not comply with the second request and continued with their task, the robot waited silently beside them. Participants were not required to finish the sorting task but all did.
After each trial, the participants took off the VR headset and went to a table where they filled in the questionnaires on a tablet. They indicated their compliance decision in the questionnaire and then answered questionnaires regarding the strategy’s acceptance [97], how intense they had perceived the strategy to be [100], and their trust in the robot [98].
The trial was then repeated after participants had put the VR headset back on. The final questionnaire after trial 2 contained the assessment of immersion [101]. Time spent in VR was about 15–20 minutes with interruptions by questionnaires. The longest continuous time spent in VR was about 5 minutes.
At the end of the study, the experimenter interviewed the participant about their experience with the robot including manipulation checks (e.g., “What did the robot say to you?”). After completion of the lab appointment, participants were compensated either with course credit or money.
3.7 Questionnaires
Validated questionnaires were used if existent, including their translation into German (see Table 2). Self-developed metrics included participants’ ratings of robot behaviour regarding politeness (two items on 7-point semantic differential: impolite - polite, inconsiderate - considerate), respect (two items: selfish - courteous, disrespectful - respectful) and fear of the robot’s presence (six items on 7-point Likert scale, e.g., ’I was afraid of the robot’s behaviour’).
Self-developed questions also regarded the compliance decision and were based on [103]: perceived benefit of cooperation (four items on 7-point Likert scale, e.g., ‘I would have benefited from following the robot’s request’), intrinsic costs of cooperation assessed by feelings after the interaction (four items on 7-point semantic differential: e.g., guilty - not guilty, embarrassed - self-confident).
3.8 Data Analysis
3.8.1 Compliance Behaviour Coding
Compliance behaviour was assessed live by the experimenter and verified and categorized using the screen recordings of the participant’s view within the virtual environment. The category ‘compliance after the request’ was given if the participant interrupted the sorting task and stepped aside within ten seconds after the request. The category ‘no compliance’ was given if the participant finished the sorting task before stepping aside for the robot.
3.8.2 Statistical Tests for Differences
The Wilcoxon test was applied to test for differences regarding ordinal-scaled compliance (H1). To test for the expected differences in acceptance, politeness, trust and fear (H2), ANOVAs with contrasts were calculated. The chosen contrast weights reflect the comparison between strategies adhering to politeness norms and strategies contradicting them (i.e., strategies with a command as first request). The respective contrasts pol-ben, ben-pol, pol-com and ben-com were each weighted with 1 and compared to com-pol and com-ben, each weighted with −2. The single requests in which participants complied after the first request (pol-none, ben-none, com-none) and the control group were weighted with 0. If the homogeneity of variance assumption of the ANOVA was violated, df-corrected values are reported. Due to the small sample size, the normality assumption could not be assumed for trust, politeness and fear (significant Kolmogorov-Smirnov (KS) tests), but for acceptance the KS test was not significant (\(p = .051\)). According to [104], the F-statistic of an ANOVA is fairly robust against violations of the normality assumption. Additionally, an ANOVA with planned contrasts can be considered more powerful than a non-parametric Kruskal-Wallis test with unspecific post-hoc comparisons [105]. For the strategy evaluation, the data of nine experimental groups (adding pol-none, ben-none and com-none to the conditions) are reported due to the adaptive design: if a participant complied after the first request, s/he could only evaluate the single request element, not the strategy. To investigate RQ1, whether the adaptive design would be beneficial, ANOVAs with contrasts were calculated to see if the single requests (pol-none, ben-none, com-none) were perceived as more acceptable, politer and trustworthier than the combined strategies. The contrast weights were as follows: pol-com (−1) and pol-none (1), with the rest weighted 0. Weights were analogous for the com-none and ben-none comparisons.
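The planned contrast described above (weight 1 for the politeness-conforming strategies, −2 for com-pol and com-ben, 0 elsewhere) can be sketched as follows. This is a generic illustration, not the authors’ analysis script; the function name and the simulated data are ours:

```python
import numpy as np
from scipy import stats

def planned_contrast(groups, weights):
    """Planned contrast t-test across independent groups.

    groups  -- list of 1-D arrays, one per condition
    weights -- contrast weights (must sum to zero)
    """
    assert abs(sum(weights)) < 1e-9, "contrast weights must sum to zero"
    k = len(groups)
    ns = np.array([len(g) for g in groups])
    means = np.array([np.mean(g) for g in groups])
    # Pooled error variance (MSE) from the one-way ANOVA
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_error = ns.sum() - k
    mse = ss_within / df_error
    # Contrast estimate and its standard error
    c = np.asarray(weights, dtype=float)
    estimate = c @ means
    se = np.sqrt(mse * (c ** 2 / ns).sum())
    t = estimate / se
    p = 2 * stats.t.sf(abs(t), df_error)
    return t, p, df_error

# Weights in the order: pol-ben, ben-pol, pol-com, ben-com,
# com-pol, com-ben, pol-none, ben-none, com-none, control
weights = [1, 1, 1, 1, -2, -2, 0, 0, 0, 0]
```

Note that the weights deliberately sum to zero, so the contrast compares the mean of the four politeness-conforming strategies against the mean of the two command-first strategies while ignoring the remaining groups.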
3.8.3 Statistical Tests of Relations
An extreme group comparison was performed to find influences on compliance behaviour (RQ2), as the prerequisites for an ordinal regression were not fulfilled. To find influences on the strategy evaluations (RQ2), step-wise linear regressions were conducted with the personality traits and NARS as potential predictors of strategy acceptance for trials 1 and 2. As trust, politeness and fear could not be assumed to be normally distributed, only acceptance was used as criterion. The normality assumption for the predictors was checked, and the final predictors NARS and Neuroticism could be assumed to be normally distributed (KS test: Neuroticism: \(p = .20\); NARS: \(p = .20\)). The prerequisite of linear relationships between predictors and criterion was inspected using scatterplots, and linear relationships were visible. Collinearity was checked and, based on the statistics reported in Table 5, did not seem to be an issue.
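A step-wise (forward) selection of predictors, as used here for strategy acceptance, can be sketched with partial F-tests. The sketch below is a generic illustration under our own assumptions (predictor names such as 'neuroticism' are placeholders), not a reconstruction of the authors’ exact procedure:

```python
import numpy as np
from scipy import stats

def forward_stepwise(X, y, names, alpha_enter=0.05):
    """Forward step-wise OLS: repeatedly add the candidate predictor
    with the smallest partial-F p-value until none falls below
    alpha_enter. Returns the names of the selected predictors."""
    n = len(y)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining:
        best = None
        # Current (reduced) model: intercept plus already-selected predictors
        A0 = np.column_stack([np.ones(n)] + [X[:, c] for c in selected])
        beta0, *_ = np.linalg.lstsq(A0, y, rcond=None)
        rss0 = ((y - A0 @ beta0) ** 2).sum()
        for j in remaining:
            # Full model: reduced model plus candidate predictor j
            A = np.column_stack([A0, X[:, j]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = ((y - A @ beta) ** 2).sum()
            df2 = n - A.shape[1]
            F = (rss0 - rss) / (rss / df2)          # partial F for predictor j
            p = stats.f.sf(F, 1, df2)
            if best is None or p < best[0]:
                best = (p, j)
        if best[0] < alpha_enter:
            selected.append(best[1])
            remaining.remove(best[1])
        else:
            break
    return [names[j] for j in selected]
```

A usage sketch: with simulated data in which acceptance depends negatively on a hypothetical ‘neuroticism’ score, the function should select that predictor first and stop once no further candidate improves the model significantly.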
3.8.4 Interview Data
Due to missing data (e.g., technical issues), interview data of only \(n = 53\) participants were analyzed by categorizing participants’ answers by two independent coders.
4 Manipulation Checks
4.1 Immersion
Immersion in VR was assessed by using the immersion subscale of the Technology Usage Inventory (TUI) [101] and was \(M=21.2\) (\(SD = 4.2\)) in our sample. This is one standard deviation higher than the reference values provided by the TUI manual [101] for young adults (19 - 32 years) (\(M=15.87; SD=5.9, N=97\)). Presumably, participants were quite immersed in the virtual environment.
4.2 Evaluation of the Robot
After the introduction and before the interaction, participants rated the robot in terms of humanness, uncanniness (7-point Likert scale) and the robot’s power of impact (5-point scale with higher values indicating higher robot power). On average, participants rated the robot as neutral regarding humanness (\(M=3.1; SD = 1.1\)) and uncanniness (\(M= 3.5; SD = 1.1\)). The participants rated the robot’s power of impact (strength, speed, weight and the potential to harm) as equal to their own (\(M= 3.1; SD = 0.6\)). Summarizing, the robot was rated rather neutrally.
4.3 Participants’ Strategy Perception
After the experiment, the participants were interviewed and asked whether they could reproduce what the robot had said during the interaction. All participants could recall the robot’s words, either literally (8%) or by their meaning (90%). They were also asked if they had noticed a difference between the first and second request, and it was then checked whether they had perceived the request as intended (e.g., whether the command was perceived as commanding). Most of the participants (68%) perceived the strategies and requests as intended, including the differences between the requests. Only six minor exceptions occurred: three participants in the control group interpreted the goal explanation as a request to step aside, and three people who received the persuasive element ‘showing benefit’ did not recognize that a benefit was shown.
The manipulation check also included an intensity rating of the strategies, where participants indicated by choosing one out of five SAM pictures [100] how intense they perceived the robot’s strategy. No strategy differences emerged but strategies were significantly perceived as less intense during trial 2 (\(F(1, 50) = 6.5, p < .05, \eta ^2 = .12\)).
5 Results
5.1 Strategy Effectiveness: User Compliance
In H1, it was expected that polite robot behaviour (pol-ben & ben-pol) and assertive robot behaviour that fulfills the human politeness scheme (pol-com & ben-com) would lead to higher compliance than assertive behaviour that does not (com-pol & com-ben).
Although the compliance rates did not differ significantly, descriptively the pol-com, com-ben and ben-com strategies produced compliance rates above 60% in the first trial (see Fig. 6). The other three strategies achieved compliance at chance level (rates around 50%). All strategies seemingly had higher rates than the control group in the first trial, except for the com-pol group. However, in the second trial, the compliance rates descriptively decreased for all strategies except for the com-pol strategy. For more details on the compliance rates, see Appendix, Table 4.
Summarizing, H1 has to be rejected as strategies that matched the human politeness scheme did not lead to higher compliance than strategies that violated it.
5.2 Strategy Evaluation: Acceptance, Politeness, Trust and Fear
In H2, it was expected concerning the strategy evaluation that polite robot behaviour (pol-ben & ben-pol) and assertive robot behaviour that fulfills the human politeness scheme (pol-com & ben-com) would be rated as more acceptable, polite and trustworthy than assertive behaviour that does not (com-pol & com-ben).
Results are depicted in Fig. 7. As can be seen, for trial 1 the control group was already highly accepted, but the pol-none and the ben-pol strategies were also rated as very acceptable, polite and trustworthy. Significant group differences between conditions were found with contrast testing for trial 1: strategies that fulfilled the human politeness scheme (pol-ben, ben-pol, pol-com, ben-com) were rated as more acceptable (ANOVA: \(F(9,56) = 3.7, p < .001\); contrast: \(t(47) = 3.4, p < .001\), Cohen’s \(d = 0.99\)), politer (ANOVA: \(F(9,56) = 3.1, p < .01\); contrast: \(t(47) = 1.8, p < .05\), Cohen’s \(d = 0.53\)) and more trustworthy (ANOVA: \(F(9,56) = 2.5, p < .05\); contrast: \(t(16) = 2.3, p < .05\), Cohen’s \(d = 1.15\), df corrected for unequal variances) than the com-pol and com-ben strategies. Regarding participants’ fear, no significant differences occurred between conditions or trials, but descriptively, the com-ben strategy had the highest average fear ratings. However, in trial 2 no group differences were significant for either dependent variable. Still, variance in participants’ ratings seemed to decrease and the ratings of the strategies containing a command increased slightly.
Summarizing, regarding H2 the robot was more accepted, perceived as politer and trustworthier if it used strategies that fulfilled the human politeness scheme, but this effect was only significant in trial 1.
5.3 Effects of Adaptive Robot Behaviour
RQ1 asked whether it would be beneficial for a robot to apply an adaptive request behaviour similar to human request making.
Overall, the positive effect of adaptive robot behaviour can be seen in Fig. 7, trial 1 (top): participants who complied with the first request (pol-none, ben-none, com-none) descriptively perceived the robot more positively regarding acceptance, politeness and trust than those who complied only after the second request. Contrast testing revealed a significant difference between the pol-none and the pol-com strategy for acceptance (ANOVA: \(F(9,56) = 3.69, p < .001\); contrast: \(t(47) = 2.4, p < .05\), Cohen’s \(d = 0.7\)) and politeness (ANOVA: \(F(9,56) = 3.07, p < .01\); contrast: \(t(47) = 3.0, p < .01\), Cohen’s \(d = 0.88\)). Hence, participants who only heard an appeal and then complied rated the robot as more acceptable and polite than those who did not comply and received a command as second request. No differences occurred for trust or for the comparison between ben-none and ben-com. No group differences were significant in trial 2.
5.4 Influences on User Compliance and Strategy Acceptance
5.4.1 Predictors of User Compliance
RQ2 asked which user characteristics would influence user compliance and strategy evaluation.
An extreme group comparison was performed to find user characteristics that determine compliance behaviour. The sample was divided into a high compliance group (n = 24, ‘compliance after first request’) and a low compliance group (n = 15, ‘no compliance’). The low compliance group contained significantly more males (\(\chi ^2(2) = 10.08, p < .01\)) and more individuals with robot experience (\(\chi ^2(2) = 5.11, p < .05\)). The results hint at the possibility that gender and robot experience might influence user compliance with a robot’s request.
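An extreme group comparison of this kind boils down to a χ² test of independence on a contingency table of group membership versus the user characteristic. The sketch below uses scipy with purely illustrative counts (not the study’s data) and a simplified 2×2 table, whereas the paper reports tests with two degrees of freedom:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = compliance group (high / low),
# columns = gender (female / male). Counts are invented for
# illustration only and do not come from the study.
table = np.array([[18,  6],   # high compliance: 18 female, 6 male
                  [ 4, 11]])  # low compliance:   4 female, 11 male

chi2, p, dof, expected = chi2_contingency(table)
```

With such a skewed table the test yields a small p-value, mirroring the kind of gender imbalance reported for the low compliance group; `expected` holds the cell counts expected under independence, which is useful for checking the test’s validity (expected counts should not be too small).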
5.4.2 Predictors of Strategy Acceptance
A step-wise linear regression was performed to find relevant personality traits that predicted strategy acceptance per trial. Regression models for each trial with significant predictors are shown in Table 5. Participants’ neuroticism ratings significantly predicted strategy acceptance in trial 1, explaining 17% of the variance in the acceptance ratings. As the predictor ‘neuroticism’ had a negative sign, the relationship between neuroticism and strategy acceptance was assumed to be negative. This means that acceptance of the robot’s strategy was less likely in trial 1 if the participant scored high on Neuroticism according to the BIG5.
In trial 2, strategy acceptance was predicted by Neuroticism and NARS, which together explained a quarter of the variance in acceptance (25%). Since both predictors had negative signs, strategy acceptance was less likely if the person scored high on Neuroticism according to the BIG5 and had negative attitudes towards robots according to the NARS questionnaire. To sum up, acceptance of the robot’s strategy was less likely in both trials if the participant scored high on Neuroticism according to the BIG5. For trial 2, strategy acceptance was also less likely if the person had negative attitudes towards robots.
5.5 Qualitative Findings
5.5.1 Hesitation During Compliance Decision
Concerning the compliance behaviour, 48% of participants said they hesitated in their decision during trial 1 but not during trial 2, and 21% reported having been hesitant in both trials. This might indicate that participants did indeed experience a conflict of interest, as the decision to comply was not easy to reach.
5.5.2 Reasons for Compliance Decision
Participants were asked in the interview how they came to their decision to comply or not. About half of the participants based their compliance decision on either the cooperation benefit (18%), the cooperation cost (16%), or the prioritization of one of the tasks (19%). When explicitly asked whose tasks the participants considered more important, the majority (61%) said their own task was more important than the robot’s. Only 12% valued the robot’s task higher and 21% said that both tasks were equally important.
Additionally, thirteen percent named time pressure and six percent named the robot’s strategy as the reason for their decision. The latter reported having based their decision on the robot’s assertiveness and politeness as expressed by the strategy.
For example, for the pol-com strategy, a 20-year old female participant said: ‘Because he was so determined. [...] Otherwise, he’ll keep bugging me for another five minutes.’
An 18-year old female participant remarked for the com-pol strategy: ‘The first time, I intended to step aside immediately. But then it irritated me because the robot was not friendly enough. The second time the request was a little bit more friendly and then I stepped aside right away.’
To summarize, half of the participants hesitated in their decision to comply in trial 1 but not in trial 2. The majority of participants reported having based their compliance decision on weighing the costs and benefits of cooperation and considered their task more important than the robot’s.
5.6 Summary of Results
- Conflict resolution strategies that followed human politeness norms were more accepted, politer and trustworthier than strategies entailing a command during trial 1 but not trial 2.
- The robot’s conflict resolution strategies achieved compliance rates between 40 and 60%.
- Adapting the strategies to participants’ compliance behaviour seemed to be beneficial for polite requests followed by a command.
- Neuroticism and negative attitudes towards robots were negatively associated with participants’ strategy acceptance.
- Males and participants with prior robot experience were less likely to comply.
- Half of the participants hesitated in their decision to comply in trial 1 but not in trial 2.
- The majority of participants considered their task more important than the robot’s.
6 Discussion
The aim of this study was to investigate how an autonomous service robot can assert itself when making a request in a human-robot conflict situation. Hereby, different combinations of assertive and polite request elements (appeal, showing benefit, command) applied by a humanoid service robot were tested for user compliance and acceptance. It was assumed that sequences of conflict resolution strategies that fulfilled human politeness norms (e.g., polite request first) would be effective (H1) and at the same time more acceptable, politer and trustworthier (H2) than unusual ones.
Concerning H1, only trends were visible. Strategies containing a command led to more compliance than strategies based on politeness. However, no significant differences occurred between the strategies. Hence, H1 cannot be assumed. In general, user compliance rates ranged between 40 and 60%, which is in line with previous HRI studies [8, 9]. The compliance rates in the present study are also similar to human-human request-making. In social psychology experiments, similar compliance rates for polite requests (70%) and commands (31%) were found [106, 107]. Consequently, a robot applying the request strategies was found to be similarly effective as a human requester.
However, most participants indicated in the interview that they considered their own task to be more important than the robot’s. This might represent the effect of the human-robot power asymmetry which seems to be of relevance in the domestic context (e.g., “Why should a robot owner accept orders from its robot?”). This is in line with our previous study [9] where participants were reluctant to follow a robot’s request in the domestic context compared to the public context. This shows the difficulty of developing effective robot conflict resolution strategies for the domestic context and highlights the need for future research in this HRI area. It will be of particular interest to see if this attitude of superiority changes over time when the robot owner realises that the conflict sometimes might be more effectively solved if s/he complies with the robot. Hereby, considering the Theory of Planned Behaviour [108, 109], the strategy evaluation might influence the compliance decision via attitude formation and intention to comply.
Regarding H2, strategies that adhered to human politeness norms were indeed more accepted and rated as politer and trustworthier than those contradicting them. Hereby, first uttering a command and then showing the benefit of cooperation was rated as the least acceptable, rudest and least trustworthy strategy compared to the control group and, descriptively, as the most fearsome robot behaviour. However, the differences between the strategies only occurred for trial 1 and not trial 2. Concluding, H2 can only be partly assumed for the initial interaction with an assertive robot. That effects were not found for trial 2 might hint at a novelty effect of the interaction. This assumption is also supported by the declining strategy intensity ratings for trial 2 and the participants’ reported hesitation in trial 1 but not in trial 2. Also, the evaluations of the strategies contradicting human politeness norms became more positive in trial 2, which might reflect habituation to the robot’s requests. Novelty effects are common in HRI experiments, especially if participants have limited prior robot experience [58, 110, 111]. Therefore, it could be beneficial to investigate the presented strategies further with a long-term study design to see whether the results are due to the novelty effect of the study or whether they represent genuine insecurity in the initial interactions with an assertive robot, which would imply designing conflict resolution strategies differently based on the frequency of interaction.
Two research questions were investigated in addition to the hypotheses. RQ1 investigated whether it would be beneficial to adapt the robot’s conflict resolution strategies to the participant’s compliance behaviour. Participants who already complied with the first request should not be deterred by a second (potentially more assertive) request. It was found that participants who only heard an appeal and then complied rated the robot as more acceptable and polite than those who complied only after the second request. The other comparison, between ben-none and ben-com, did not reach significance. Potentially, for the pol-com strategy the contrast between both request elements was larger than for ben-com, and hence the difference was more pronounced. Due to the limitations of the between-subjects design in the present study, this result can only be regarded as preliminary support for adaptive robot behaviour: it might be argued that this difference was found because the subjects who complied after the first request accepted the robot more in the first place than those who did not. However, as no differences in acceptance or trust levels between the groups before the interaction were found (see Appendix, Table 3), this did not seem to be the case. Nevertheless, other pre-existing differences between participants that were not assessed in the study might have influenced their decision whether or not to comply with the first request. Based on social-psychological knowledge about which personality traits might influence compliance decisions, specific traits such as assertiveness [112], submissiveness [113] or a tendency for pro-social behaviour [114] could be investigated in future studies. Therefore, future studies investigating the potential benefit of adaptive conflict resolution strategies may apply a within-subjects design. For example, the same participant could receive both strategies (single request and adaptive request), which could then be compared.
Additionally, as with the other reported strategy differences, the potential benefits of adaptive behaviour could only be seen in trial 1 but not in trial 2. Therefore, long-term studies are needed to determine if the robot’s adaptive behaviour is still beneficial after repeated interaction or if its behaviour becomes predictable and adaptivity becomes obsolete.
RQ2 investigated potential influencing factors on strategy acceptance and compliance. Regarding influences on the compliance decision, an extreme group comparison was performed and it was found that the low compliance group consisted of more males and individuals with robot experience. Although gender and robot experience are common influences on HRI [60, 115, 116], further studies are needed to replicate this finding and explore what exact feature of those two broad user characteristics does influence the compliance to a robot’s request (e.g., assertiveness as user trait, experience with humanoid robots).
Acceptance of the robot’s strategy was lower for individuals reporting to have more negative attitudes towards robots and being more neurotic according to the Big5 personality model. This has also been found in previous studies for Neuroticism [72, 82] and NARS [116, 117]. Neuroticism and NARS have also been shown to be positively associated with each other [118]. Users scoring high on both also tend to keep more distance to robots [6, 119] and prefer mechanical-looking robots [82]. As mentioned in the discussion regarding RQ1, other more specific personality traits regarding cooperation such as assertiveness [112], submissiveness [113] or a tendency for pro-social behaviour [114] could be investigated in further studies. This knowledge about individual differences in reactions to human-robot conflicts could then be used to personalize robot conflict resolution strategies (e.g., different strategies based on user dominance/submissiveness or robot experience) to increase the acceptance of assertive robots.
Taken together, the presented study found that during initial HRI, conflict resolution strategies which followed human politeness norms were more accepted, politer and trustworthier than strategies that contradicted them. Overall, compliance rates were comparable to human-human requests but declined in the second trial. Therefore, long-term studies are needed to determine if the robot’s conflict resolution strategy can influence the decision to comply with a robot’s request over the long term.
6.1 HRI Design Implications
To make recommendations on how robot assertiveness might be implemented in an acceptable way for a humanoid domestic service robot, we first summarize the results of the user study in a list and then discuss implications and make recommendations for each strategy.
Considered appropriate but declined in effectiveness:

1. First utter a polite request and then persuade by showing benefits (pol-ben)
2. First persuade by showing benefits and then utter a polite request (ben-pol)
3. First persuade by showing benefits and then command (ben-com)

Potentially effective but not considered appropriate:

4. First utter a polite request and then command (pol-com)

Not considered appropriate and not advisable to use:

5. First command and then utter a polite request (com-pol)
6. First command and then persuade by showing benefits (com-ben)
Summarizing, strategies 1–3 seem to be interesting for future research as they were accepted and trusted. A particular interest for future research could be to identify factors that make them effective in repeated interactions (e.g., robot type, user personality, application context).
Additionally, the found influences of user characteristics on acceptance and compliance suggest that personalizing the robot’s conflict resolution behaviour and assertiveness based on user preferences could be useful. This has also been suggested for persuasive technology [52, 120]. It would also allow users to adjust the settings if, over time, they find their robot ineffective because it does not assert itself in some situations at home. Strategies 5 and 6 do not seem promising for real-world application as they were neither accepted nor effective. However, as strategy 6 was rated as the most fearsome strategy, further research could determine whether this strategy, if voiced by a different robot (e.g., smaller, less human-like), would be considered less frightening.
Finally, strategy 4 needs special mention concerning real-world applications. It produced opposite effects regarding acceptance and compliance: although the participants did not accept it, their compliance in both trials was 10 to 20% higher than in the control group. Strategy 4 might be further investigated for emergencies where compliance might be valued higher than user acceptance. For example, if the robot interrupts household members in their task to alert them to an emergency, they will not mind that the robot used a command after a polite request. The advantages and disadvantages of this strategy might be addressed in further research, paying close attention to its ethical implications.
Overall, the results indicate possible challenges of designing robot conflict resolution strategies that are simultaneously polite, acceptable, trustworthy and effective. This also includes ethical considerations regarding the desirable level of effectiveness of robot requests. Hereby, the compliance rates shown in human-human request making can serve as benchmarks (70% for polite requests [106, 107]). A higher compliance rate (e.g., 90-100%) to a domestic service robot in a non-emergency situation should be considered as undesirable [24, 25, 75].
6.2 Strengths, Limitations, and Future Work
This study systematically investigated the effect of adaptive conflict resolution strategies on user compliance and acceptance. The conflict scenario provided the opportunity to use standardized stimuli and procedures (e.g., the robot always appeared when nine items were left). It produced a conflict with mutually exclusive goals (either the robot’s task or the participant’s task could be performed). The reported participants’ hesitation when making the compliance decision might indicate that the scenario managed to produce a human-robot goal conflict that was not easy to solve. The manipulation checks also revealed that most participants seemed to perceive the requests as intended and could reproduce what the robot had said.
As the study was performed in VR, the robot’s behaviour was standardized across participants, and the VR environment seemed to be perceived as very immersive. However, as previous VR studies in HRI conclude [86, 121], interactive VR experiments should be complemented with live robot experiments to validate results. First studies comparing lab and VR studies in HRI have indeed found similar results (for an overview, see [29]). However, the presented study would still benefit from a live robot experiment, especially regarding robot embodiment’s effects on user compliance.
Nevertheless, the study has some limitations that need to be considered when interpreting the results. The testing of adaptive strategies had the advantage of producing more human-like robot conflict resolution behaviour but also led to small group sizes (e.g., participants who only heard the first request), which reduced statistical power. Especially for the conditions pol-ben (trial 1) and ben-com (both trials), it has to be noted that all participants in these groups did not hear a second request, as they all either complied with the first request or did not comply. Hence, these results represent the single effect of either an appeal or a command. Moreover, for the extreme group comparison of compliance groups, the limitations of this statistical procedure have to be considered (e.g., variance limitations [122]) and the results should be validated in further samples. Furthermore, the sample consisted mainly of students, which limits the generalization of the results. Future sample selection and recruiting should also consider that social rules for assertiveness and politeness are shaped by culture [22]. In European countries, an assertive robot might be acceptable, but in Asian countries, it might be deemed unacceptable and rude. This has been demonstrated for Germans and Chinese regarding assertive communication strategies of a small autonomous delivery robot [123]. Future studies are needed with larger, more diverse samples from different cultures.
Likewise, a restriction regarding subscale S2 of the NARS questionnaire has to be mentioned. Although the NARS is widely used in HRI research (e.g., [124,125,126]), it has limitations such as low subscale reliability (especially S2, ‘Social Influence of Robots’) in European populations [124]. Therefore, the full scale was used for the subsequent analyses.
Moreover, repeating the study with different robot types (e.g., androids, mechanoids) should be considered, as the robot type is likely to influence the acceptability of conflict resolution strategies, as it did in previous studies [9]. Likewise, the modality of the conflict resolution strategies has to be considered in future studies: the tested strategies were verbal utterances, which might not be feasible for every robot type (e.g., mechanoid robots), as has been shown in [9]. Future studies could therefore develop non-verbal conflict resolution strategies for assertive domestic robots, which could, for example, indicate by different approach speeds, sounds, or projections that they would like to continue their task. For humanoid robots, in contrast, it could be tested whether imitating human body language (e.g., gestures, movements) or facial expressions (e.g., smiling, eye gaze) during requests is beneficial for acceptance and compliance [127,128,129]. Such non-verbal communication could modulate the impression conveyed by the respective request. For example, it could be tested whether the robot would be perceived as more assertive and persuasive if it crossed its arms in front of its chest while uttering the command (e.g., [130]), as a human requester would [128, 131].
Additionally, the conflict scenario might benefit from improvements. As time pressure in the sorting task was named as a reason for the compliance decision, it does not seem advisable to use it in future studies. Based on the participants’ feedback, it also needs to be better justified why the robot cannot simply clean around the participant instead of requesting the user to step aside (e.g., by stating that its battery is running low).
Naturally, to keep the study design manageable, only three conflict resolution strategies were selected for testing. Given the versatility of human behaviour, other combinations of request elements are conceivable that might also be effective and accepted. For instance, future studies could examine sequential-request compliance techniques known from social psychology, such as the foot-in-the-door technique (successfully applied to persuasive robots by [132]), to find acceptable and effective robot conflict resolution strategies.
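The adaptive two-request scheme underlying all tested strategies can be summarized in a minimal sketch. Note that the function, the dictionary of request elements, and the request wordings below are illustrative assumptions for exposition, not the original experimental materials or implementation:

```python
# Sketch of the adaptive conflict resolution scheme described in the study:
# the robot utters the first request element and only issues the second
# element if the user has not yet complied. Strategy names follow the
# paper's notation (e.g., 'pol-ben'); the request texts are hypothetical
# paraphrases of the three elements (appeal, showing benefit, command).

REQUESTS = {
    "pol": "Could you please step aside so I can clean here?",  # appeal
    "ben": "If you step aside, I can finish cleaning sooner.",  # showing benefit
    "com": "Step aside so I can clean here.",                   # command
}

def run_strategy(strategy, user_complies):
    """Return the list of requests actually uttered.

    strategy: e.g. 'pol-ben' (first element 'pol', second element 'ben')
    user_complies: callable taking the uttered request and returning
                   True if the user steps aside after hearing it.
    """
    uttered = []
    for element in strategy.split("-"):
        request = REQUESTS[element]
        uttered.append(request)
        if user_complies(request):
            break  # adaptive: no further request after compliance
    return uttered

# A user who complies immediately hears only the first request:
assert len(run_strategy("pol-ben", lambda r: True)) == 1
# A user who never complies hears both requests:
assert len(run_strategy("ben-com", lambda r: False)) == 2
```

This also makes explicit why some condition groups became small: whenever all participants complied with (or never responded to) the first element, the second element was never uttered, so its combined effect could not be observed.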
7 Conclusion
This VR study examined six adaptive conflict resolution strategies for a humanoid service robot to resolve human-robot goal conflicts in the domestic context. The strategies consisted of combinations of assertive and polite request elements (appeal, showing benefit, command) and were adapted to the users’ compliance behaviour (i.e., the robot did not utter a second request if the participant had already complied). Based on human norms of polite behaviour, it was expected that robot conflict resolution strategies would need to follow human politeness norms to be effective, acceptable, and trustworthy. The results showed that strategies respecting human politeness and conflict resolution norms were more accepted and perceived as more polite and trustworthy, but not more effective. Moreover, these differences were only found for the initial interaction with the robot in trial 1. Compliance did not differ between strategies, but male participants with robot experience were less likely to comply. The study’s contribution is also methodological, as it introduces an experimental paradigm for investigating human-robot goal conflicts in a virtual environment. It represents a first step toward developing assertive conflict resolution strategies for humanoid domestic service robots and provides directions for future research, such as long-term studies to investigate the results further.
Data Availability Statement
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Notes
Abbreviations: appeal = ’pol’, showing benefit = ’ben’, and command = ’com’. The strategies were named after the combination of their elements: e.g., pol-ben denotes a polite appeal as the first request followed by showing the benefit of cooperation as the second request.
References
Savela N, Turja T, Oksanen A (2018) Social acceptance of robots in different occupational fields: a systematic literature review. Int J Soc Robot 10(4):493–502. https://doi.org/10.1007/s12369-017-0452-5
De Graaf MMA, Allouch SB (2013) Exploring influencing variables for the acceptance of social robots. Robot Auton Syst 61(12):1476–1486. https://doi.org/10.1016/j.robot.2013.07.007
Sung J, Grinter RE, Christensen HI (2010) Domestic robot ecology. Int J Soc Robot 2(4):417–429
Jarrassé N, Sanguineti V, Burdet E (2014) Slaves no longer: review on role assignment for human-robot joint motor action. Adapt Behav 22(1):70–82. https://doi.org/10.1177/1059712313481044
Matthews G, Lin J, Panganiban AR, Long MD (2020) Individual differences in trust in autonomous robots: implications for transparency. IEEE Trans Human-Machine Syst 50(3):234–244. https://doi.org/10.1109/THMS.2019.2947592
Takayama L, Groom V, Nass C (2009) I’m Sorry, Dave: I’m afraid I won’t do that: social aspects of human-agent conflict. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI 2009), pp 2099–2107. https://doi.org/10.1145/1518701.1519021
Thomas J, Vaughan R (2019) Right of way, assertiveness and social recognition in human-robot doorway interaction. In: IEEE international conference on intelligent robots and systems, pp 333–339, https://doi.org/10.1109/IROS40897.2019.8967862
Thomas J, Vaughan R (2018) After You: doorway negotiation for human-robot and robot-robot interaction. In: IEEE international conference on intelligent robots and systems, pp 3387–3394, https://doi.org/10.1109/IROS.2018.8594034
Babel F, Kraus JM, Baumann M (2021) Development and testing of psychological conflict resolution strategies for assertive robots to resolve human-robot goal conflict. Front Robot AI 7. https://doi.org/10.3389/frobt.2020.591448
Ray C, Mondada F, Siegwart R (2008) What do people expect from robots? In: 2008 IEEE/RSJ Int Conf Intell Robot Syst, pp 3816–3821, https://doi.org/10.1109/IROS.2008.4650714
Vollmer AL (2018) Fears of Intelligent Robots. In: Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI ’18, pp 273–274, https://doi.org/10.1145/3173386.3177067, http://dl.acm.org/citation.cfm?doid=3173386.3177067
Ziefle M, Valdez AC (2017) Domestic robots for homecare: a technology acceptance perspective. Lecture notes in computer science 10297 LNCS:57–74, https://doi.org/10.1007/978-3-319-58530-7_5
Goetz J, Kiesler S, Powers A (2003) Matching robot appearance and behavior to tasks to improve human-robot cooperation. In: The 12th IEEE international workshop on robot and human interactive communication. Proceedings. ROMAN 2003., IEEE, pp 55–60, https://doi.org/10.1109/ROMAN.2003.1251796
Groom V, Nass C (2007) Can robots be teammates? Benchmarks in human-robot teams. Interact Stud 8(3):483–500. https://doi.org/10.1075/is.8.3.10gro
Lee JJ, Knox WB, Wormwood JB, Breazeal C, DeSteno D (2013) Computationally modeling interpersonal trust. Front Psychol 4. https://doi.org/10.3389/fpsyg.2013.00893
Reeves B, Nass CI (1996) The media equation: how people treat computers, television, and new media like real people and places. Cambridge University Press, Cambridge
Rahim MA (1992) Managing conflict in organizations. In: Fenn P, Gameson R (eds) Proc First Int Constr Manag Conf Univ Manchester Inst Sci Technol, E & F N Spon, pp 386–395
Pfafman T (2017) Assertiveness. In: Zeigler-Hill, V, Shackelford T (eds) Encyclopedia of Personality and Individual Differences, Springer International Publishing, https://doi.org/10.1007/978-3-319-28099-8_1044-1
Brett J, Thompson L (2016) Negotiation. Org Behav Human Decis Process 136:68–79. https://doi.org/10.1016/J.OBHDP.2016.06.003
Kobberholm KW, Carstens KS, Bøg LW, Santos MHA, Ramskov S, Mohamed SA, Jensen LC (2020) The Influence of Incremental Information Presentation on the Persuasiveness of a Robot. In: HRI ’20: Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Association for Computing Machinery (ACM), pp 302–304, https://doi.org/10.1145/3371382.3378338
Yamamoto Y, Sato M, Hiraki K, Yamasaki N, Anzai Y (1992) A request of the robot: an experiment with the human-robot interactive system HuRIS. Proceedings IEEE international workshop on robot and human communication, ROMAN 1992:204–209. https://doi.org/10.1109/ROMAN.1992.253887
Lee N, Kim J, Kim E, Kwon O (2017) The influence of politeness behavior on user compliance with social robots in a healthcare service setting. Int J Soc Robot 9(5):727–743. https://doi.org/10.1007/s12369-017-0420-0
Lee Y, Bae JE, Kwak SS, Kim MS (2011) The effect of politeness strategy on human - robot collaborative interaction on malfunction of robot vacuum cleaner. In: RSS’11 (Robotics Sci Syst Work Human-Robot Interact)
Roubroeks MAJ, Ham JRC, Midden CJH (2010) The Dominant robot: threatening robots cause psychological reactance, especially when they have incongruent goals. In: International conference on persuasive technology, Springer Heidelberg, pp 174–184, https://doi.org/10.1007/978-3-642-13226-1_18
Cormier D, Newman G, Nakane M, Young JE, Durocher S (2013) Would You Do as a Robot Commands? An Obedience Study for Human-Robot Interaction. In: International Conference on Human-Agent Interaction
Adair WL, Brett JM (2005) The negotiation dance: time, culture, and behavioral sequences in negotiation. Org Sci 16(1):33–51. https://doi.org/10.1287/orsc.1040.0102
Preuss M, van der Wijst P (2017) A phase-specific analysis of negotiation styles. J Bus Ind Mark 32(4):505–518. https://doi.org/10.1108/JBIM-01-2016-0010
Mara M, Meyer K, Heiml M, Pichler H, Haring R, Krenn B, Gross S, Reiterer B, Layer-Wagner T (2021) Cobot Studio VR: a virtual reality game environment for transdisciplinary research on interpretability and trust in human-robot collaboration. In: VAM-HRI 2021, March, 2021, Boulder, Colorado USA
Sadka O, Giron J, Friedman D, Zuckerman O, Erel H (2020) Virtual-reality as a simulation tool for non-humanoid social robots. In: Ext Abstr 2020 CHI Conf Hum Factors Comput Syst, (ACM), New York, NY, USA, pp 1–9, https://doi.org/10.1145/3334480.3382893, https://dl.acm.org/doi/10.1145/3334480.3382893
Duguleana M, Barbuceanu FG, Mogan G (2011) Evaluating human-robot interaction during a manipulation experiment conducted in immersive virtual reality. In: Lecture notes in computer science, Springer, Berlin, Heidelberg, pp 164–173, https://doi.org/10.1007/978-3-642-22021-0_19, http://link.springer.com/10.1007/978-3-642-22021-0_19
Matsas E, Vosniakos GC, Batras D (2017) Effectiveness and acceptability of a virtual environment for assessing human-robot collaboration in manufacturing. Int J Adv Manuf Technol 92(9–12):3903–3917
Mara M, Stein JP, Latoschik ME, Lugrin B, Schreiner C, Hostettler R, Appel M (2021) User responses to a humanoid robot observed in real life, virtual reality, 3D and 2D. Front Psychol. https://doi.org/10.3389/fpsyg.2021.633178
Vorauer JD, Claude SDD (1998) Perceived versus actual transparency of goals in negotiation. Personal Soc Psychol Bull 24(4):371–385. https://doi.org/10.1177/0146167298244004
Hüffmeier J, Freund PA, Zerres A, Backhaus K, Hertel G (2014) Being tough or being nice? A meta-analysis on the impact of hard- and softline strategies in distributive negotiations. J Manage 40(3):866–892. https://doi.org/10.1177/0149206311423788
Yanco HA, Drury J (2004) Classifying human-robot interaction: an updated taxonomy. In: 2004 IEEE international conference on systems, man and cybernetics, IEEE, vol 3, pp 2841–2846, https://doi.org/10.1109/ICSMC.2004.1400763
Drury JL, Scholtz J, Yanco HA (2003) Awareness in human-robot interactions. In: SMC’03 conference proceedings. 2003 IEEE international conference on systems, man and cybernetics. Conference Theme-System Security and Assurance, IEEE, vol 1, pp 912–918, https://doi.org/10.1109/icsmc.2003.1243931
Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Human Factors 37(1):32–64. https://doi.org/10.1518/001872095779049543
Baumann M, Krems J (2009) A comprehension based cognitive model of situation awareness. In: Duffy V (ed) Int Conf Digit Hum Model, Springer, San Diego, CA, USA, July 19–24, pp 192–201, https://doi.org/10.1007/978-3-642-02809-0_21
Durso FT, Rawson KA, Girotto S (2007) Comprehension and Situation Awareness. In: Durso F, Nickerson R, Dumais S, Lewandowsky S, Perfect T (eds) Handb Appl Cogn vol 2, Wiley Amsterdam, pp 163–193, https://doi.org/10.1002/9780470713181.ch7
Lee MK, Kiesler S, Forlizzi J, Srinivasa S, Rybski P (2010) Gracefully mitigating breakdowns in robotic services. In: 2010 5th ACM/IEEE international conference on human-robot interaction (HRI), Institute of Electrical and Electronics Engineers (IEEE), pp 203–210, https://doi.org/10.1109/hri.2010.5453195
Stange S, Kopp S (2020) Effects of a social robot’s self-explanations on how humans understand and evaluate its behavior. In: ACM/IEEE Int Conf Human-Robot Interact, IEEE Computer Society, pp 619–627, https://doi.org/10.1145/3319502.3374802
Fischer K, Soto B, Pantofaru C, Takayama L (2014) Initiating interactions in order to get help: effects of social framing on people’s responses to robots’ requests for assistance. In: The 23rd IEEE international symposium on robot and human interactive communication, IEEE, pp 999–1005, https://doi.org/10.1109/ROMAN.2014.6926383
Backhaus N, Rosen PH, Scheidig A, Gross HM, Wischniewski S (2019) Somebody Help Me, Please?!’ interaction design framework for needy mobile service robots. In: Proc IEEE Work Adv Robot its Soc Impacts, ARSO, vol 2018, pp 54–61, https://doi.org/10.1109/ARSO.2018.8625721
Locher MA, Watts RJ (2008) Relational work and impoliteness: negotiating norms of linguistic behaviour. In: Impoliteness in language: studies on its interplay with power in theory and practice. De Gruyter, chap 4, pp 77–100. https://doi.org/10.1515/9783110208344.2.77
Brown P, Levinson SC (1987) Politeness: some universals in language usage. Cambridge University Press, Cambridge
Baxter L (1984) An investigation of compliance-gaining as politeness. Hum Commun Res 10(3):427–456. https://doi.org/10.1111/j.1468-2958.1984.tb00026.x
Salem M, Ziadee M, Sakr M (2013) Effects of politeness and interaction context on perception and experience of HRI. In: International conference on social robotics, Springer, pp 531–541, https://doi.org/10.1007/978-3-319-02675-6_53
Forgas JP (1998) Asking Nicely? The effects of mood on responding to more or less polite requests. Personal Soc Psychol Bull 24(2):173–185. https://doi.org/10.1177/0146167298242006
Blum-Kulka S (1987) Indirectness and politeness in requests: Same or different? J Pragmat 11(2):131–146. https://doi.org/10.1016/0378-2166(87)90192-5
Nass C (2004) Exhibitions and expectations of computer politeness. Commun ACM 47(4):35–37. https://doi.org/10.1145/975817.975841
Nomura T, Saeki K (2010) Effects of polite behaviors expressed by robots: a psychological experiment in Japan. Int J Synth Emot (IJSE) 1(2):38–52. https://doi.org/10.4018/jse.2010070103
Fogg B (2002) Persuasive technology: using computers to change what we think and do (Interactive Technologies). Morgan Kaufmann. https://doi.org/10.1145/764008.763957
Hamari J, Koivisto J, Pakkanen T (2014) Do persuasive technologies persuade?-A review of empirical studies. In: Int Conf Persuas Technol, Springer, pp 118–136, https://doi.org/10.1007/978-3-319-07127-5_11
Siegel M, Breazeal C, Norton MI (2009) Persuasive robotics: the influence of robot gender on human behavior. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems, IROS 2009, pp 2563–2568, https://doi.org/10.1109/IROS.2009.5354116, https://ieeexplore.ieee.org/abstract/document/5354116/
Ghazali AS, Ham J, Barakova E, Markopoulos P (2020) Persuasive robots acceptance model (PRAM): roles of social responses within the acceptance model of persuasive robots. Int J Soc Robot 12(5):1075–1092. https://doi.org/10.1007/s12369-019-00611-1
Ham J, Midden CJH (2014) A persuasive robot to stimulate energy conservation: the influence of positive and negative social feedback and task similarity on energy-consumption behavior. Int J Soc Robot 6(2):163–171. https://doi.org/10.1007/s12369-013-0205-z
Kamei K, Shinozawa K, Ikeda T, Utsumi A, Miyashita T, Hagita N (2010) Recommendation from robots in a real-world retail shop. In: International conference on multimodal interfaces and the workshop on machine learning for multimodal interaction, ICMI-MLMI 2010, https://doi.org/10.1145/1891903.1891929
Saunderson S, Nejat G (2019) How robots influence humans: a survey of nonverbal communication in social human-robot interaction. Int J Soc Robot 11(4):575–608. https://doi.org/10.1007/s12369-019-00523-0
Torrey C, Fussell SR, Kiesler S (2013) How a Robot Should Give Advice. In: Proceedings of the ACM/IEEE international conference on human-robot interaction - HRI’13, pp 275–282, https://doi.org/10.1109/HRI.2013.6483599
Paradeda R, Ferreira MJ, Oliveira R, Martinho C, Paiva A (2019) What makes a good robotic advisor? The role of assertiveness in human-robot interaction. In: Lect Notes Comput Sci, Springer, vol 11876, pp 144–154, https://doi.org/10.1007/978-3-030-35888-4_14
Chidambaram V, Chiang YH, Mutlu B (2012) Designing persuasive robots. In: Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction, IEEE, pp 293–300, https://doi.org/10.1145/2157689.2157798
Xin M, Sharlin E (2007) Playing games with robots - a method for evaluating human-robot interaction. In: Sarkar N (ed) Human Robot Interaction, Itech Education and Publishing, pp 522, https://doi.org/10.5772/5208
Rains SA (2013) The nature of psychological reactance revisited: a meta-analytic review. Hum Commun Res 39(1):47–73
Ghazali AS, Ham J, Barakova E, Markopoulos P (2018) The influence of social cues in persuasive social robots on psychological reactance and compliance. Comput Human Behav 87:58–65. https://doi.org/10.1016/j.chb.2018.05.016
Jenkins M, Dragojevic M (2013) Explaining the process of resistance to persuasion: a politeness theory-based approach. Commun Res 40(4):559–590. https://doi.org/10.1177/0093650211420136
Dillard JP (1991) The current status of research on sequential-request compliance techniques. Personal Soc Psychol Bull 17(3):283–288. https://doi.org/10.1177/0146167291173008
Inbar O, Meyer J (2015) Manners matter: trust in robotic peacekeepers. In: Proceedings of the human factors and ergonomics society, human factors and ergonomics society, pp 185–189, https://doi.org/10.1177/1541931215591038, https://journals.sagepub.com/doi/abs/10.1177/1541931215591038
Zhu B, Kaber D (2012) Effects of etiquette strategy on human-robot interaction in a simulated medicine delivery task. Intell Serv Robot 5(3):199–210
Castro-González Á, Castillo JC, Alonso-Martín F, Olortegui-Ortega OV, González-Pacheco V, Malfaz M, Salichs MA (2016) The effects of an impolite vs. a polite robot playing rock-paper-scissors. Lecture notes in computer science 9979:306–316. https://doi.org/10.1007/978-3-319-47437-3_30
Srinivasan V, Takayama L (2016) Help me please: robot politeness strategies for soliciting help from people. In: Proceedings of the 2016 CHI conference on human factors in computing systems - CHI ’16, pp 4945–4955, https://doi.org/10.1145/2858036.2858217
Danescu-Niculescu-Mizil C, Sudhof M, Dan J, Leskovec J, Potts C (2013) A computational approach to politeness with application to social factors. In: ACL 2013 - 51st Annu Meet Assoc Comput Linguist Proc Conf, vol 1, pp 250–259. arxiv:1306.6078
Salem M, Lakatos G, Amirabdollahian F, Dautenhahn K (2015) Would You Trust a (Faulty) Robot? Effects of error, task type and personality on human-robot cooperation and trust. In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction - HRI ’15, pp 141–148, https://doi.org/10.1145/2696454.2696497
Miller CH, Lane LT, Deatrick LM, Young AM, Potts KA (2007) Psychological reactance and promotional health messages: the effects of controlling language, lexical concreteness, and the restoration of freedom. Hum Commun Res 33(2):219–240. https://doi.org/10.1111/j.1468-2958.2007.00297.x
Christenson AM, Buchanan JA, Houlihan D, Wanzek M (2011) Command use and compliance in staff communication with elderly residents of long-term care facilities. Behav Ther 42(1):47–58. https://doi.org/10.1016/j.beth.2010.07.001
Geiskkovitch DY, Cormier D, Seo SH, Young JE (2016) Please continue, we need more data: an exploration of obedience to robots. J Human-Robot Interact 5(1):82–99. https://doi.org/10.5898/jhri.5.1.geiskkovitch
Strait M, Canning C, Scheutz M (2014) Let Me Tell You! investigating the effects of robot communication strategies in advice-giving situations based on robot appearance, interaction modality and distance. In: Proceedings of the 2014 ACM/IEEE international conference on human-robot interaction (HRI’14), pp 479–486, https://doi.org/10.1145/2559636.2559670
Rea DJ, Schneider S, Kanda T (2021) “Is This All You Can Do? Harder!”: the effects of (im)polite robot encouragement on exercise effort. In: Proceedings of the 2021 ACM/IEEE international conference on human-robot interaction (HRI ’21), March 8–11, 2021, Boulder, CO, USA, ACM, New York, NY, USA, pp 225–233, https://doi.org/10.1145/3434073.3444660
Tversky A, Kahneman D (1989) Rational choice and the framing of decisions. In: Multiple criteria decision making and risk analysis using microcomputers, Springer, pp 81–126
Boardman AE, Greenberg DH, Vining AR, Weimer DL (2017) Cost-benefit analysis: concepts and practice. Cambridge University Press, Cambridge
Paramasivam S (2007) Managing disagreement while managing not to disagree: polite disagreement in negotiation discourse. J Intercult Commun Res 36(2):91–116. https://doi.org/10.1080/17475750701478661
Strait M, Briggs P, Scheutz M (2015) Gender, more so than age, modulates positive perceptions of language-based human-robot interactions. In: Salem M, Weiss A, Baxter P, Dautenhahn K (eds) 4th international symposium on new frontiers in human-robot interaction. Canterbury, UK
Robert L (2018) Personality in the human robot interaction literature: a review and brief critique. In: Proceedings of the 24th Americas Conference on Information Systems, pp 2–10
Robert L, Alahmad R, Esterwood C, Kim S, You S, Zhang Q (2020) A review of personality in human-robot interactions. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3308191
Nomura T, Kanda T, Suzuki T, Kato K (2008) Prediction of human behavior in human - robot interaction using psychological scales for anxiety and negative attitudes toward robots. IEEE Trans Robot 24(2):442–451. https://doi.org/10.1109/TRO.2007.914004
Bartneck C, Suzuki T, Kanda T, Nomura T (2007) The influence of people’s culture and prior experiences with AIBO on their attitude towards robots. AI Soc 21(1):217–230. https://doi.org/10.1007/s00146-006-0052-7
van den Brule R, Dotsch R, Bijlstra G, Wigboldus DHJ, Haselager P (2014) Do robot performance and behavioral style affect human trust? Int J Soc Robot 6(4):519–531. https://doi.org/10.1007/s12369-014-0231-5
Kamezaki M, Kobayashi A, Yokoyama Y, Yanagawa H, Shrestha M, Sugano S (2019) A preliminary study of interactive navigation framework with situation-adaptive multimodal inducement: pass-by scenario. Int J Soc Robot. https://doi.org/10.1007/s12369-019-00574-3
Senft E, Satake S, Kanda T (2020) Would You Mind Me If I Pass By You? Socially-appropriate behaviour for an omni-based social robot in narrow environment. In: ACM/IEEE Int Conf Human-Robot Interact, IEEE Computer Society, New York, NY, USA, pp 539–547, https://doi.org/10.1145/3319502.3374812, https://dl.acm.org/doi/10.1145/3319502.3374812
Osborne MJ (2004) An introduction to game theory, vol 3. Oxford University Press, New York
Stuhlmacher AF, Gillespie TL, Champagne MV (1998) The impact of time pressure in negotiation: a meta-analysis. Int J Confl Manag 9(2):97–116
Hock P, Kraus J, Babel F, Walch M, Rukzio E, Baumann M (2018) How to design valid simulator studies for investigating user experience in automated driving: review and hands-on considerations. In: Proceedings of the 10th international conference on automotive user interfaces and interactive vehicular applications, association for computing machinery, New York, NY, USA, AutomotiveUI ’18, p 105–117, https://doi.org/10.1145/3239060.3239066
Costa PT, McCrae RR (1985) NEO Five Factor Inventory. Psychological Assessment Resources Inc, USA
Gilet A, Mella N, Studer J, Grühn D (2013) Assessing dispositional empathy in adults: a french validation of the interpersonal reactivity index (IRI). Can J Behav Sci 45(1):42–48
Rahim MA (1983) A measure of styles of handling interpersonal conflict. Acad Manag J 26(2):368–376
Bartneck C, Kulić D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81. https://doi.org/10.1007/s12369-008-0001-3
Ho CC, MacDorman KF (2017) Measuring the uncanny valley effect: refinements to indices for perceived humanness, attractiveness, and eeriness. Int J Soc Robot 9(1):129–139. https://doi.org/10.1007/s12369-016-0380-9
Van Der Laan JD, Heino A, De Waard D (1997) A simple procedure for the assessment of acceptance of advanced transport telematics. Transp Res Part C Emerg Technol 5(1):1–10. https://doi.org/10.1016/S0968-090X(96)00025-3
Kraus JM (2020) Psychological Processes in the formation and calibration of trust in automation. Dissertation, Dissertation Ulm University. https://doi.org/10.18725/OPARU-32583
Vogt A, Babel F, Hock P, Baumann M, Seufert T (2021) Prompting in-depth learning in immersive virtual reality: impact of an elaboration prompt on developing a mental model. Comput Educ 171:1–15. https://doi.org/10.1016/j.compedu.2021.104235
Bradley MM, Lang PJ (1994) Measuring emotion: the self-assessment manikin and the semantic differential. J Behav Ther Exp Psych 25(1):49–59. https://doi.org/10.1016/0005-7916(94)90063-9
Kothgassner OD, Felnhofer A, Hauk N, Kastenhofer E, Gomm J, Kryspin-Exner I (2012) Technology Usage Inventory - Manual. ICARUS (Information- and Communication Technology Applications: Research on User-oriented Solutions), Wien
Jian JY, Bisantz AM, Drury CG (2000) Foundations for an empirically determined scale of trust in automated systems. Int J Cognit Ergon 4(1):53–71. https://doi.org/10.1207/S15327566IJCE0401_04
Bulgurcu B, Cavusoglu H, Benbasat I (2010) Information security policy compliance: an empirical study of rationality-based beliefs and information security awareness. MIS Q 34(3):523–548
Blanca MJ, Alarcón R, Bono R, Bendayan R (2017) Non-normal data: Is ANOVA still a valid option? Psicothema 29(4):552–557. https://doi.org/10.7334/psicothema2016.383
Field A (2013) Discovering statistics using IBM SPSS statistics. Sage Publications, USA
Dolinska B, Dolinski D (2006) To command or to ask? Gender and effectiveness of tough vs soft compliance-gaining strategies. Soc Influence 1(1):48–57
Dolinski D (2015) Techniques of social influence: the psychology of gaining compliance. Taylor & Francis, UK
Ajzen I (1985) From intentions to actions: a theory of planned behavior. In: Action Control, Springer, pp 11–39, https://doi.org/10.1007/978-3-642-69746-3_2
Ajzen I (2011) The theory of planned behaviour: reactions and reflections. Psychol Heal 26(9):1113–1127. https://doi.org/10.1080/08870446.2011.613995
Smedegaard CV (2019) Reframing the role of novelty within social HRI: from Noise to Information. In: ACM/IEEE International Conference on Human-Robot Interaction (HRI‘19), IEEE, vol 2019-March, pp 411–420, https://doi.org/10.1109/HRI.2019.8673219
Gockley R, Bruce A, Forlizzi J, Michalowski M, Mundell A, Rosenthal S, Sellner B, Simmons R, Snipes K, Schultz AC, et al. (2005) Designing robots for long-term social interaction. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems, IEEE, pp 1338–1343, https://doi.org/10.1109/IROS.2005.1545303
Arrindell WA, Van der Ende J (1985) Cross-sample invariance of the structure of self-reported distress and difficulty in assertiveness: experiences with the scale for interpersonal behaviour. Adv Behav Res Ther 7(4):205–243
Allan S, Gilbert P (1997) Submissive behaviour and psychopathology. Br J Clin Psychol 36(4):467–488
Rodrigues J, Ulrich N, Mussel P, Carlo G, Hewig J (2017) Measuring prosocial tendencies in Germany: sources of validity and reliability of the revised prosocial tendency measure. Front Psychol 8:2119. https://doi.org/10.3389/fpsyg.2017.02119
De Graaf MMA, Allouch SB (2013) The relation between people’s attitude and anxiety towards robots in human-robot interaction. In: Proc - IEEE Int Work Robot Hum Interact Commun, pp 632–637, https://doi.org/10.1109/ROMAN.2013.6628419, https://ieeexplore.ieee.org/abstract/document/6628419/
Naneva S, Sarda Gou M, Webb TL, Prescott TJ (2020) A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int J Soc Robot. https://doi.org/10.1007/s12369-020-00659-4
Reich N, Eyssel F (2013) Attitudes towards service robots in domestic environments: the role of personality characteristics, individual interests, and demographic variables. Paladyn J Behav Robot https://doi.org/10.2478/pjbr-2013-0014
Müller SL, Richert A (2018) The big-five personality dimensions and attitudes to-wards robots: a cross sectional study. In: ACM international conference proceeding series, pp 405–408, https://doi.org/10.1145/3197768.3203178
Walters ML (2009) An empirical framework for human-robot proxemics. In: Procs of new frontiers in human-robot interaction: symposium at the AISB09 convention, pp 144–149
Kaptein M, Markopoulos P, De Ruyter B, Aarts E (2015) Personalizing persuasive technologies: explicit and implicit personalization using persuasion profiles. Int J Hum Comput Stud 77:38–51. https://doi.org/10.1016/j.ijhcs.2015.01.004
Williams T, Szafir D, Chakraborti T, Phillips E (2019) Virtual, Augmented, and Mixed Reality for Human-Robot Interaction (VAM-HRI). In: ACM/IEEE Int. Conf. Human-Robot Interact., ACM, vol 2019-March, pp 671–672, https://doi.org/10.1109/HRI.2019.8673207, https://dl.acm.org/citation.cfm?id=3173561
Preacher KJ, MacCallum RC, Rucker DD, Nicewander WA (2005) Use of the extreme groups approach: a critical reexamination and new recommendations. Psychol Methods 10(2):178–192. https://doi.org/10.1037/1082-989X.10.2.178
Lanzer M, Babel F, Yan F, Zhang B, You F, Wang J, Baumann M (2020) Designing communication strategies of autonomous vehicles with pedestrians: an intercultural study. Proceedings - 12th International ACM conference on automotive user interfaces and interactive vehicular applications, AutomotiveUI 2020 pp 122–131, https://doi.org/10.1145/3409120.3410653
Syrdal DS, Dautenhahn K, Koay KL, Walters ML (2009) The negative attitudes towards robots scale and reactions to robot behaviour in a live human-robot interaction study. adaptive and emergent behaviour and complex systems - proceedings of the 23rd convention of the society for the study of artificial intelligence and simulation of behaviour, AISB 2009 pp 109–115
Kaplan AD, Sanders T, Hancock PA (2019) The relationship between extroversion and the tendency to anthropomorphize robots: a Bayesian analysis. Front Robot AI. https://doi.org/10.3389/frobt.2018.00135
Backonja U, Hall AK, Painter I, Kneale L, Lazar A, Cakmak M, Thompson HJ, Demiris G (2018) Comfort and attitudes towards robots among young, middle-aged, and older adults: a cross-sectional study. J Nurs Scholar 50(6):623–633. https://doi.org/10.1111/jnu.12430
Mavridis N (2015) A review of verbal and non-verbal human-robot interactive communication. Robot Autonom Syst 63:22–35. https://doi.org/10.1016/j.robot.2014.09.031
Lambert D (2004) Body language. Harper Collins, USA
Babel F, Kraus J, Miller L, Kraus M, Wagner N, Minker W, Baumann M (2021) Small talk with a robot? The impact of dialog content, talk initiative, and gaze behavior of a social robot on trust, acceptance, and proximity. Int J Soc Robot. https://doi.org/10.1007/s12369-020-00730-0
Brooks AG, Arkin RC (2007) Behavioral overlays for non-verbal communication expression on a humanoid robot. Autonom Robots 22(1):55–74. https://doi.org/10.1007/s10514-006-9005-8
Rossi G (2014) When do people not use language to make requests? In: Drew P, Couper-Kuhlen E (eds) Requesting in social interaction. John Benjamins Publishing Company, pp 303–334. https://doi.org/10.1075/slsi.26.12ros
Lee SA, Liang YJ (2019) Robotic foot-in-the-door: using sequential-request persuasive strategies in human-robot interaction. Comput Human Behav 90:351–356. https://doi.org/10.1016/j.chb.2018.08.026
Acknowledgements
The authors would like to thank the students Mareike Schüle, Julia Gildehaus, and Madeleine Doig for their assistance with data acquisition.
Open Access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Funding
This research has been conducted within the interdisciplinary research project ‘RobotKoop’, which is funded by the German Ministry of Education and Research (Grant Number 16SV7967).
Ethics declarations
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Ethical Statement
The study was carried out in accordance with the Declaration of Helsinki. Ethical approval was received from the ethical committee of Ulm University (No. 378/19).
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Babel, F., Vogt, A., Hock, P. et al. Step Aside! VR-Based Evaluation of Adaptive Robot Conflict Resolution Strategies for Domestic Service Robots. Int J of Soc Robotics 14, 1239–1260 (2022). https://doi.org/10.1007/s12369-021-00858-7