1 Introduction

Polite encouragement plays an important role in improving human skills and forming favorable impressions [1]. Typical examples include social rewards and praise reflecting supportive attitudes and speech, which are often used to motivate and persuade others to do a particular task or to continue doing something. Past studies investigated the positive effects of polite encouragement in the contexts of increasing self-efficacy [2], improving motivation [3, 4], improving the self-concept of children [5], eliciting better emotional reactions [6], and improving learning performance [7,8,9]. Polite encouragement is also effective for physical activities, e.g., in task performance and the consolidation of learning [10, 11].

Based on these results in the human psychology literature, researchers have investigated the effects of polite encouragement from computers, including such anthropomorphic agents as computer-based characters and physical robots. Past studies reported that praise from computers fosters positive feelings in people [12, 13]. As with human beings, polite encouragement from computers improved people's motivation and task performance [14,15,16]. Other studies reported that the number of robots that praised people influenced performance improvement [17], and that such physical interactions as a robot touching a person while praising increased motivation [18]. Such knowledge is useful for designing the encouragement behaviors of robots for educational purposes, which is one promising application for robots in daily environments [19].

However, some studies reported advantages of impolite encouragement, the opposite of polite encouragement (e.g., comments that apply social pressure through rude attitudes to motivate or persuade). For example, a past study reported that impolite behaviors boosted performance in the context of sports psychology [20]. A study in human–robot interaction reported that impolite encouragement from a robot increased exercise more than polite encouragement did [21]. Another study focused on the influence of inconsistency between polite/impolite phrases and the postures of a social robot [22]. That study did not focus on encouragement effects; it reported that the consistency of politeness influenced recall task performance.

Although these studies suggest the possibility of using impolite encouragement in interaction with robots, scant research has directly compared the effects of polite and impolite encouragement from robots. Moreover, impolite encouragement might have negative side effects on interaction with people, even if it improves performance. For example, the human psychology literature suggests that impolite encouragement increases stress and anxiety [23]. Receiving impolite encouragement may also propagate to people's own behaviors in the context of negative reciprocity [24, 25]. Another study identified a carryover effect by a social robot, i.e., people who interact with robots change their behaviors in subsequent interactions with others depending on the robot's attitude [26]. These studies suggest that impolite encouragement may have lingering negative effects on interactions with robots and with other people. Therefore, we investigated the effects of polite and impolite encouragement from a robot (Fig. 1) in the context of performance improvement, mood, and attitude propagation.

Fig. 1 Robot's polite/impolite encouragement

2 Theoretical Framework

2.1 Performance Improvement from Encouragement

People's behaviors are changed by the presence of others [27,28,29,30,31], as captured by such well-known concepts as social facilitation and social loafing. Recent human–computer interaction studies also showed that the presence of robots and computer agents causes similar effects on human behavior during interactions [32,33,34] because humans regard such artificial beings as social others [35,36,37]. Other researchers showed that social cues from robots, like their embodiment, also affect human reactions and behaviors. For example, several studies investigated how agents' embodiment and social cues influence the behavior changes of interacting people [17, 38, 39]. Other studies discussed how the effects of such behavior changes might contribute to the applications of social robots [40, 41].

Influenced by these works, past studies investigated the effectiveness of polite encouragement from computers and robots, such as praise, as a kind of social cue during interaction. For example, praise from robots and agents increased the task performances of human participants [14,15,16]. Another study reported that social robots that provide advice and encouragement based on different rationales change people's decision-making [42]. Social robots are often designed to be polite because people evaluate such robots as more likable, considerate, and helpful [43,44,45].

However, other studies found opposite effects, i.e., a positive perspective of the impoliteness of social robots [21, 46, 47]. For example, negative feedback or impolite encouragement produced positive changes in behavior, including increased motivation to exercise and reduced electricity usage.

Therefore, it remains unknown which type of encouragement is better for robots that motivate humans. A past study directly compared the effectiveness of polite and impolite encouragement and showed the advantages of the latter, although the effect was only investigated in an exercising context [21]. Other past studies investigated the effectiveness of polite encouragement with different tasks, e.g., learning or monotonous ones [14,15,16]. Since people often use encouragement to motivate others during such tasks, we employed a monotonous task in our study. We believe that comparing polite and impolite encouragement across a variety of tasks will contribute to understanding which types are appropriate for robot behavior design.

Based on these related studies, we hypothesized that both types of encouragement will increase performance compared to no encouragement, similar to other kinds of social cues. In addition, following previous work [21], we expected impolite encouragement to increase performance more than polite encouragement. The following are our main hypotheses:


H1: Polite encouragement from a robot will improve performance.


H2: Impolite encouragement from a robot will improve performance.


H3: Impolite encouragement from a robot will improve performance more than polite encouragement.

2.2 Mood Changes Caused by Encouragement

Although people sometimes find impolite robots amusing, others dislike them [48, 49]. While improving performance in exchange for a bad impression might be an acceptable trade-off [20], we focus on other negative effects. Impolite encouragement might increase stress and fuel such negative moods as anxiety; such feelings are deleterious to mental health and decrease motivation [50]. We believe that investigating the effects of polite/impolite encouragement on perceived stress will deepen the understanding of their effects.

Based on these considerations, we next hypothesized that impolite encouragement will increase perceived stress and negative moods:


H4: Impolite encouragement from a robot will increase the negative moods of people and negative impressions toward the robot more than polite encouragement.

2.3 Propagation Effects from Encouragement

Propagation effects are another topic that requires attention for polite/impolite encouragement. The human psychology literature reports that interactions with others influence subsequent interactions. For example, people who receive prosocial behaviors engage in more prosocial behaviors than others: the pay-it-forward phenomenon [51, 52]. Such opposite phenomena as the chain of unfairness [44] and negative reciprocity [21, 22] argue that people behave negatively toward others because they themselves previously received similar negative interactions. Another study reported that such propagation outcomes, called carryover effects, occurred when people interacted with social robots that alienated them [26].

Past studies related to polite/impolite encouragement mainly focused on the effects on performance and perceived impressions. However, they focused less on the propagation effects caused by encouragement from robots. Based on past studies, we hypothesized that encouragement from robots will also have propagation effects on people and will change their behavior after interaction with robots:


H5: People who receive polite encouragement from a robot will encourage others politely.


H6: People who receive impolite encouragement from a robot will encourage others impolitely.

2.4 Summary of Our Study

Our current study investigated the above six hypotheses through two experiments. Experiment I tackled H1 to H4 with a test where a robot politely/impolitely encouraged participants as they performed a monotonous task. We compared the effects of polite/impolite encouragement by evaluating the participants' performances and perceived feelings. Experiment II tackled H5 and H6 with an experiment where the participants encouraged a dummy participant who performed the monotonous task, after the participants themselves had received polite/impolite encouragement from the robot. We evaluated their encouragement choices to compare the propagation effects of polite/impolite encouragement.

3 Experiment I

3.1 Participants

Forty-eight people participated: 24 females and 24 males. Their ages ranged from 20 to 59, and their average age was 40.2 (S.D. = 12.4). They were recruited through a temporary employment agency.

3.2 Environment

As shown in Fig. 1, we installed a display and a robot in our laboratory’s experiment room. The robot was on the left side of the display and made comments based on the experiment condition.

3.3 Participants’ Task

In this experiment, referring to a behavioral economics study [53], we prepared an on-screen drag-and-drop task used in past studies that investigated polite encouragement effects from robots [18, 54]. The GUI explained that gray circles would appear on the left side and that the task was to drag as many of these circles as possible onto a dark square on the right side during the experiment. The participants then repeatedly dragged circles to the specified square (Fig. 2).

Fig. 2 Task GUI for drag-and-drop task: After participants drag a circle to a square, the circle immediately disappears and returns to its original position

The task’s duration was six minutes, during which the robot commented every 30 s: 12 comments in total. The spoken content changed depending on the experiment condition described in the next subsection.

3.4 Robot and System Overview

Our experiment used Sota (Fig. 3), which has eight degrees of freedom (DOFs): three for its head, two for each arm, and one for its lower body. It is 28 cm tall. Sota was programmed to autonomously perform such idle motions as slowly swinging its arms while speaking.

Fig. 3 Sota, a desktop-sized robot

Figure 4 shows an overview of our system and its relationship to our experiments. In Experiment I, the task GUI (Fig. 2) measured task performance. The speech function chooses the encouragement comments based on the measured performance and the encouragement rules (described in the next subsection) and sends a command to the robot to play a sound. Note that each participant's task progress is recorded in a database for Experiment II. The details are described in Sect. 5.
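This data flow can be summarized in a minimal sketch. All names here (run_slot, robot_say, the progress table) are hypothetical; the paper does not specify the actual robot command interface or database schema.

```python
# Minimal sketch of the per-slot data flow: log progress, pick a
# comment from the measured performance, and command the robot to speak.
import sqlite3

def run_slot(db, slot, drag_count, prev_count, robot_say):
    """Handle one 30-s slot of the six-minute session."""
    # Record the participant's task progress for replay in Experiment II.
    db.execute("INSERT INTO progress (slot, drags) VALUES (?, ?)",
               (slot, drag_count))
    # Choose an encouragement comment from the measured performance
    # (placeholder rule; the real rules depend on the condition).
    comment = ("Your pace went up." if drag_count > prev_count
               else "Keep going.")
    robot_say(comment)  # i.e., send a play-sound command to the robot
    return comment

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE progress (slot INTEGER, drags INTEGER)")
spoken = []
run_slot(db, slot=1, drag_count=14, prev_count=10, robot_say=spoken.append)
```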

Fig. 4 System overview

The experiment had a between-participant design. We prepared three conditions to investigate the effects of encouragement: polite, neutral, and impolite. The participants were randomly assigned to one of the three encouragement types, and identical gender ratios were maintained among them. In each condition, eight females and eight males were assigned.

3.4.1 Polite Condition

To prepare the polite encouragement comments, we followed past studies that focused on the effects of social rewards from robots [18, 54] and used the same task. The positive feedback was designed to praise the participants' efforts and their performance changes during the task. In addition, we designed the robot to politely encourage participants regardless of their performance. Since a past study reported that politeness might eventually decrease motivation [55], we also prepared neutral comments.

Based on the above considerations, we designed the robot to provide eight pre-defined polite encouragements, two polite encouragements based on task performance, and two neutral comments (Fig. 5). Each comment consisted of two sentences. For example, among the pre-defined polite encouragements, the robot said, "Your performance is good. Your drag speed seems faster than in the previous trial," regardless of the participants' performances.

Fig. 5 Order of robot's comments

For the polite encouragement based on task performance, the robot chose between different polite comments by comparing the number of drag actions in the current 30 s with that in the previous 30 s. For example, even if the participant did fewer tasks than before, the robot politely encouraged her: "You are doing the task very carefully. Now try to increase your speed!" As a neutral comment, for example, the robot announced: "Three and a half minutes have elapsed. We are already past the halfway mark."

3.4.2 Impolite Condition

To prepare the impolite encouragement comments, we followed a past study that focused on their effects [21] and the concept of impoliteness [23] to avoid simply providing negative feedback. In that previous work [21], the impolite encouragement content was designed to roughly resemble the polite encouragement content; only the politeness was changed, to mitigate context effects. We likewise changed only the politeness of the prepared encouragement comments from the polite condition and kept the neutral comments.

Therefore, similar to the polite condition, the robot made eight pre-defined impolite encouragements, two impolite encouragements based on task performance, and two neutral comments (Fig. 5). As in the polite condition, each comment consisted of two sentences. For example, among the pre-defined impolite encouragements, the robot said, "Your performance is not very good. Your task speed seems slower than in the previous trial," regardless of the participants' performances.

For the impolite encouragement based on task performance, the robot also gave different comments by comparing the number of drag actions in the current 30 s with that in the previous 30 s. For example, even if the participant did more tasks than before, the robot impolitely encouraged him: "Your performance has only slightly improved. Focus!" We used the same sentences as in the polite condition for the two neutral comments.
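The performance-based rule shared by both conditions can be sketched as follows. Only the comparison of adjacent 30-s windows is taken from the descriptions above; the function name is our own, and except for the two quoted comments, the strings are illustrative paraphrases rather than the actual experiment wordings.

```python
# Sketch of the performance-based comment rule in the polite and
# impolite conditions: compare drag counts of adjacent 30-s windows,
# then keep the condition's framing regardless of the outcome.

def performance_comment(current_drags, previous_drags, condition):
    """Pick a comment by comparing two adjacent 30-s windows."""
    improved = current_drags > previous_drags
    if condition == "polite":
        # Polite framing is kept even when performance dropped.
        return ("Your pace is improving. Keep it up!" if improved else
                "You are doing the task very carefully. "
                "Now try to increase your speed!")
    # Impolite framing is kept even when performance improved.
    return ("Your performance has only slightly improved. Focus!"
            if improved else "You slowed down. Concentrate on the task!")
```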

3.4.3 Neutral Condition

In the neutral condition, the robot did not encourage the participants. It provided 12 neutral comments to announce the progress of the tasks.

3.5 Measurements and Analysis

We measured the task performance for the first half (i.e., combining the number of dragged circles during six trials, 1.5 min) and the last half as objective items during the experiment. In our analysis of the task performances, we investigated the effects of two factors: time (first half/last half) and condition (polite/neutral/impolite). We employed the time factor to investigate the encouragement effects during the experiment and the condition factor to compare the effects of the encouragement styles.

We also measured the participants' moods using the Japanese version of the Profile of Mood States (POMS2-A) [56] before and after the experiment. POMS2-A has 65 items that consist of seven subscales: Tension-Anxiety, Depression-Dejection, Anger-Hostility, Vigor-Activity, Fatigue-Inertia, Confusion-Bewilderment, and Friendliness. For our evaluation, we used the total mood disturbance (TMD), which is calculated from the subscale scores. Note that an increase in TMD indicates that people felt more stress and experienced a more negative mood. In the analysis of the TMD values, we also investigated the effects of two factors: prepost (before/after) and condition (polite/neutral/impolite). We employed the former to investigate the changes in the participants' moods.
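For reference, TMD in POMS2 is the sum of the five negative-mood subscale scores minus Vigor-Activity (Friendliness is not part of TMD). A small sketch with hypothetical scores, not data from our experiment:

```python
# TMD = (Tension-Anxiety + Depression-Dejection + Anger-Hostility
#        + Fatigue-Inertia + Confusion-Bewilderment) - Vigor-Activity.

def total_mood_disturbance(scores):
    negative = ("Tension-Anxiety", "Depression-Dejection",
                "Anger-Hostility", "Fatigue-Inertia",
                "Confusion-Bewilderment")
    return sum(scores[k] for k in negative) - scores["Vigor-Activity"]

# Hypothetical pre-experiment scores for one participant.
pre = {"Tension-Anxiety": 10, "Depression-Dejection": 8,
       "Anger-Hostility": 6, "Fatigue-Inertia": 7,
       "Confusion-Bewilderment": 9, "Vigor-Activity": 12}
print(total_mood_disturbance(pre))  # 40 - 12 = 28
```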

In addition, we measured the likability subscale (five items) of the Godspeed questionnaire [57] to investigate the participants' perceived impressions of the robot. For likability, we used only one factor (condition: polite/neutral/impolite) to investigate the encouragement effects on their perceived impressions.

3.6 Procedure

An experimenter explained our study’s procedure to the participants, who gave written informed consent to join this study, which was approved by the Ethics Committee of our institute.

After the instruction, the participants filled out a questionnaire (POMS2-A) to measure their mood before starting the experiment. Then they followed the instructions on the screen and performed a ten-second practice session, followed by a six-minute experimental session. The robot gave comments to the participants every 30 s during the experiment session based on the condition to which they were assigned. After the experimental session, they again filled out questionnaires (POMS2-A and likability scores) to measure their moods and their perceived impressions of the robot after the experiment.

4 Results of Experiment I

4.1 Performance

Figure 6 shows the average task performances in each condition between the first and last halves. We conducted a two-factor (time: first half/last half; condition: polite/neutral/impolite) mixed-measures ANOVA, and the result showed significant differences in the time factor (F(1, 45) = 44.482, p < 0.001, partial η2 = 0.497) and the interaction effect (F(2, 45) = 4.801, p = 0.013, partial η2 = 0.176). We found no significant difference in the condition factor (F(2, 45) = 0.081, p = 0.923, partial η2 = 0.004).
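The reported effect sizes can be recovered directly from each F statistic and its degrees of freedom via the standard identity partial η2 = F·df1 / (F·df1 + df2); a quick check against the values above:

```python
# Partial eta squared from an F value and its degrees of freedom:
# eta_p^2 = (F * df1) / (F * df1 + df2).

def partial_eta_squared(f, df1, df2):
    return (f * df1) / (f * df1 + df2)

# Checking the reported ANOVA results:
print(round(partial_eta_squared(44.482, 1, 45), 3))  # time factor: 0.497
print(round(partial_eta_squared(4.801, 2, 45), 3))   # interaction: 0.176
```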

Fig. 6 Task performance among conditions

Multiple comparisons of the simple main effects with the Bonferroni method showed significant differences in the polite condition (last > first, p < 0.001) and in the impolite condition (last > first, p < 0.001). We found no significant difference in the neutral condition (p = 0.191). Therefore, the task performances only increased in the polite and impolite conditions. The latter did not show an advantage over the former in the performance context in this experiment.

4.2 Subjective Impressions

Figure 7 shows the difference between pre- and post-TMD values in each condition. An increasing value indicates a more negative mood. We conducted a one-factor (condition: polite/neutral/impolite) ANOVA, and the result showed a significant difference among conditions (F(2, 45) = 7.612, p = 0.01, partial η2 = 0.253). Multiple comparisons with the Bonferroni method showed significant differences between polite and impolite (p = 0.01) and between neutral and impolite (p = 0.027). There was no significant difference between the polite and neutral conditions (p = 0.910).

Fig. 7 Difference between pre- and post-total mood disturbance (TMD) values among conditions: Increasing value indicates more negative mood

Figure 8 shows the likability values in each condition. We conducted a one-factor (condition: polite/neutral/impolite) ANOVA, and the result showed a significant difference in the condition factor (F(2, 45) = 13.700, p < 0.001, partial η2 = 0.378). Multiple comparisons with the Bonferroni method showed a significant difference (polite > impolite, p < 0.001). There were no significant differences in the other combinations.

Fig. 8 Likability among conditions

Therefore, the impolite encouragement significantly increased the TMD values (indicating a more negative mood) and negative impressions toward the robot.

4.3 Summary of Experiment I

As shown in this experiment’s results, both polite and impolite encouragement increased task performance compared to the no-encouragement setting. However, unlike a past study [21], the impolite encouragement showed no advantage over the polite encouragement. Rather, it caused significantly more negative mental states, which might discourage the use of such encouragement. Thus, H1, H2, and H4 are supported, but H3 is not.

5 Experiment II

Next, related to hypotheses H5 and H6, we conducted a second experiment to investigate the propagation effects of polite/impolite encouragement from the robot.

5.1 Participants

The same human participants from Experiment I joined Experiment II immediately after the former ended.

5.2 Environment

We used the same experimental room as in Experiment I.

5.3 Participants’ Task

In Experiment II, the human participants played a role similar to the robot’s in Experiment I. They observed the six-minute task progress of a dummy participant and selected encouragement contents every 30 s.

The dummy participant’s task progress was identical to that of each human participant, i.e., we replayed each participant's own recorded task progress from Experiment I. Since the task progress of each participant differed, showing a fixed task progress as the experimental stimulus might have produced different impressions depending on each participant's own performance. Therefore, in Experiment II we showed each participant the same task progress as his or her own.

To convey the presence of the dummy participant to the human participants, we showed a computer-graphics-based agent on the screen (Fig. 9a). The human participants selected polite/neutral/impolite comments from the GUI every 30 s during the experiment. Each selected comment was played aloud by the system, so the human participants could hear the comments they selected. The same comments as in Experiment I were used in Experiment II.

Fig. 9 GUI for Experiment II

5.4 Conditions

Experiment II had a between-participant design like Experiment I (polite/neutral/impolite); in each condition, eight females and eight males were assigned. The only stimulus difference between the conditions was the robot's comments in Experiment I. In Experiment II, the human participants observed replays of their own task progress and commented using the same system.

5.5 Measurements and Analysis

We measured the ratio of polite and impolite choices during the experiment. In our analysis of the comment choices, we investigated the effects of two factors: choice (polite/impolite) and condition (polite/neutral/impolite). We employed the former to investigate the attitude propagation effects caused by the condition factor.

We also measured the perceived difference between their own performance in Experiment I and the observed performance in Experiment II. The item was assessed on a 0-to-10 response format, where 0 indicates that the observed performance was better and 10 indicates that one’s own performance was better. We were interested in whether the robot’s encouragement in Experiment I changed the human participants’ confidence in their own performances. A previous study described the effectiveness of boosting self-esteem by offering praise [51], which promotes positive and helping behaviors [58]. In analyzing this perception, we used only one factor (condition: polite/neutral/impolite) to investigate the encouragement effects on it.

5.6 Procedure

After Experiment I, the human participants were relocated to another room, where they waited for about ten minutes before the next experimental setting started. The experimenter stopped the robot's movements and launched the system to display the dummy participant. The human participants returned to the experimental room and listened to an explanation about the next task: commenting on another human participant (i.e., a dummy participant) who would perform the same task, just as the robot had done for them. The experimenter explained that the other human participant was in another location and showed the dummy participant as a computer-based agent on the display (Fig. 9a).

Next, the experimenter requested that the human participants observe the dummy participant's task progress and select a polite/neutral/impolite comment toward it every 30 s. The human participants observed the first ten seconds (Fig. 9b) and selected/changed a comment from the three candidates during the last 20 s (Fig. 9c). If the human participants did not select a comment within the 30 s, a neutral comment was selected automatically.
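The per-slot selection logic above can be sketched as follows; the function name and event representation are our own, and only the timings and the neutral default come from the procedure described here.

```python
# Sketch of one 30-s selection slot in Experiment II: the first 10 s are
# observation only, a comment may be selected or changed during the last
# 20 s, and a neutral comment is the default if nothing is chosen.

def resolve_slot(selections):
    """`selections`: list of (t_seconds, choice) events within one slot.
    The last valid choice made in the selection window wins."""
    choice = "neutral"  # auto-selected when no comment is chosen
    for t, c in selections:
        if 10 <= t < 30 and c in ("polite", "neutral", "impolite"):
            choice = c  # later selections overwrite earlier ones
    return choice
```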

The total experiment time was six minutes, identical to Experiment I. Therefore, the human participants selected 12 comments in total. As described above, the dummy task progress was the replayed task progress of the human participants themselves. After finishing the session, they filled out a questionnaire and attended a debriefing session. We explained that the other participant was a dummy and that the task progress was a replay of their own from Experiment I. None of the participants had noticed that their own task progress was being replayed.

6 Results of Experiment II

6.1 Propagation Effects

Figure 10 shows the ratios of the polite/impolite selections in each condition. We conducted a two-factor (choice: polite/impolite; condition: polite/neutral/impolite) mixed-measures ANOVA, and the result showed significant differences in the interaction effect (F(2, 45) = 7.166, p = 0.002, partial η2 = 0.242) and the choice factor (F(1, 45) = 259.316, p < 0.001, partial η2 = 0.852). There was no significant difference in the condition factor (F(2, 45) = 0.803, p = 0.454, partial η2 = 0.034). Multiple comparisons of the simple main effects with the Bonferroni method showed significant differences in the polite choice (polite > neutral, p = 0.010; polite > impolite, p = 0.044) and in the impolite choice (neutral > polite, p = 0.032; impolite > polite, p = 0.045).

Fig. 10 Number of comment types among conditions

Therefore, the participants in the polite condition selected more polite comments than those in the impolite condition, and they also selected fewer impolite comments than those in the impolite condition. Thus, H5 and H6 are supported.

6.2 Performance Comparison with Replayed Tasks

Figure 11 shows the questionnaire results of the performance comparison in each condition. We conducted a one-factor (condition: polite/neutral/impolite) ANOVA, and the result showed a significant difference among conditions (F(2, 45) = 3.726, p = 0.032, partial η2 = 0.142). Multiple comparisons with the Bonferroni method showed a significant difference between polite and impolite (p = 0.032). Thus, compared with the participants in the impolite condition, those in the polite condition in Experiment I felt that their own performances were better than the observed performances.

Fig. 11 Perceived performance comparison among conditions

7 Discussion

7.1 Implications for Performance and Mood

Similar to past studies, our experiment results showed both the positive and negative perspectives of impolite encouragement. Table 1 summarizes both experiments, showing which hypotheses were supported/rejected. Figure 12 illustrates the experiment results.

Table 1 Summary of hypotheses
Fig. 12
figure 12

Illustration of experiment results

In Experiment I, impolite encouragement increased the performance of a monotonous task more than neutral comments did. However, unlike a past study that described the advantage of impolite encouragement over polite encouragement in an exercising task [21], we found no significant difference between them. Possible reasons include differences in the task settings, in the polite/impolite encouragement contents from the robot, and in the robot itself.

Moreover, we found a negative perspective of impolite encouragement in the context of mental status, i.e., significantly increasing negative moods. Our study suggests that impolite encouragement improved performance at the expense of increased stress and anxiety. On the other hand, polite encouragement improved performance without such disadvantages. Negative moods sparked by impolite encouragement create another problem in building positive relationships between robots and people.

Well-designed impolite behaviors might provide a useful perspective in specific situations because impolite and aggressive behaviors are sometimes observed in friendly relationships in human–human interaction, particularly in children [59, 60]. However, impolite behaviors invite another risk, because they negatively affect people who observe such interactions [61,62,63]. Since these previous studies only focused on people’s rudeness, the effects of a robot's rudeness remain unknown. However, following the results of past studies as well as ours in this work, observing a robot's rudeness might also have negative effects.

7.2 Implications for Propagation

Based on the results of Experiment II, we also identified a positive perspective of polite encouragement and a negative perspective of impolite encouragement from the viewpoint of propagation effects, on which past studies focused less. Even though a robot is not human, its polite/impolite encouragement significantly influenced the participants' subsequent choices. The questionnaire results also suggest that polite encouragement helps build confidence in a person’s own abilities. A robot's praise may give participants confidence and psychological space, enabling them to encourage others politely.

Robot developers must carefully contemplate their robots' behaviors from the perspective of propagation effects. For example, suppose people engage in more impolite behavior toward others because of past interactions with an impolite robot. In that case, others might witness such rude and impolite behaviors that cause other negative effects [61,62,63], propagating further negative effects.

Moreover, such propagation effects might increase their negative impact through long-term interaction. It remains unknown whether such negative propagation is mitigated by continuous interaction. But if such effects accumulate, people's performance may decrease due to low confidence, and negative attitudes toward others would complicate efforts to achieve smooth interaction. Whether the positive effects of polite encouragement continue during long-term interaction also remains unknown. Additional work is needed to understand the changes in the positive/negative effects of polite/impolite encouragement in long-term interaction.

7.3 Necessity of Robot Embodiment

Our experiment used a physical robot to investigate encouragement effects from artificial agents. Next we discuss whether non-embodied agents, such as on-screen agents, provide effects similar to those of robots.

We think that on-screen agents would provide effects similar to robot agents because a past study reported similar praise effects between embodied and non-embodied agents (i.e., robots and screen agents) [17]. The advantage of an embodied agent in encouragement contexts is physical interaction. A previous study reported that tactile stimuli are essential to increasing motivation [18]. Therefore, if the encouragement does not include physical interaction, non-embodied agents would probably be as effective as embodied agents. In this study, we did not use touch interaction for encouragement, although it might effectively increase participants’ motivation.

In contrast, using multiple agents effectively increases performance regardless of the embodiment of the agents that encourage people. Several studies reported that praise from multiple agents is effective for improvement [64, 65]. On the other hand, since the effects of impolite encouragement from multiple agents are unknown, even stronger negative effects might appear.

7.4 Optimized Encouragement for Individuals

We employed simple and essentially fixed encouragement contents to compare polite and impolite encouragement. However, several studies investigated the effects of different forms of encouragement, such as praising ability versus effort [66, 67]. Another study investigated the relationship between children's personalities and the effects of praise types from their parents [68]. Therefore, one direction for future work is tailoring encouragement content to individuals.

For this purpose, robots or computer agents need a function that assesses personalities. Several studies have already developed such functions by observing the interaction between robots and people [69,70,71]. Others investigated the effects of offering study breaks to people [72, 73], an approach that, like encouragement, might also help maintain motivation and increase performance.

7.5 Limitations and Future Work

Our experiment’s findings have several limitations. We used a monotonous task to investigate performance changes and propagation effects, so we do not know whether similar effects occur under different task settings, such as exercising. Since our experiment used a short-term scenario, the long-term effects of polite/impolite encouragement are also unknown, as described above. In addition, the speech contents of the polite/impolite encouragements were relatively simple; we must analyze encouragement strategies [74] and implement them with robots [75].

Another limitation is our robot design; we only used a desktop-sized robot. If we used different robots, e.g., a larger humanoid robot like Pepper or a more human-like robot such as an android [56,57,58], the encouragement effects might well differ. Investigating the effects of appearance and voice on encouragement is interesting future work.

The number of participants was also relatively small, so replicating a similar experimental setting is important to investigate the reproducibility of our study. However, where significant differences were observed, our results showed relatively large effect sizes; we therefore believe that our experimental settings and results are adequate for this study.

8 Conclusion

Polite encouragement is one effective way to improve human performance. Robotics researchers have focused on its positive effects when robots support people, and recently some studies have even reported stronger positive effects of impolite than polite encouragement. Still, the advantages and disadvantages of polite/impolite encouragement remain unclear since relatively few studies have directly compared them.

Therefore, we investigated the effects of polite/impolite encouragement from robots in two experiments. In the first, we evaluated performance improvement and perceived stress under polite/impolite encouragement. The participants performed a monotonous task, and our robot provided polite/impolite encouragement or neutral comments. Both types of encouragement increased performance, although impolite encouragement provided no significant advantage, unlike in past studies. We also identified a disadvantage of impolite encouragement: a significant increase in participants' perceived stress.

In the second, we evaluated the propagation effects of encouragement. The participants selected polite/neutral/impolite comments about a dummy participant who was engaged in the same monotonous task, presented using recorded data from actual participants. Our results showed that the robot's encouragement style in the first experiment significantly influenced the participants' choices: the robot's polite/impolite attitudes propagated to the participants, whose choices became correspondingly more polite/impolite. This result reveals another advantage of polite encouragement and another disadvantage of impolite encouragement. Overall, our findings support designing robots with polite encouragement behaviors and discourage designing them with impolite ones.