Abstract
Many social robots have emerged in public places to serve people. For these services, the robots are assumed to be able to present internal aspects (i.e., mind, sociability) to engage and interact with people over the long term. In this paper, we propose a novel dialogue structure called experience-based dialogue to help a robot present such aspects and maintain good interaction over the long term. This dialogue structure contains a piece of knowledge and a story about how the robot gained this knowledge, which are used to compose the robot’s experience-related utterances; these utterances share experiences of interacting with previous users, not just the current user, and help the robot present its internal aspects. We conducted an experiment to test the effects of our proposed dialogue structure and measured them with published subjective scales. The results showed that experience-based dialogue can help a robot obtain better evaluations in terms of perceived intelligence, sociability, mind, anthropomorphism, animacy, likability, level of acceptance, and positive user reaction.
1 Introduction
Nowadays, more social robots are serving people in nursing homes [1], schools [2], shops [3], restaurants [4], information desks [5], etc. For these services, people and robots are assumed to interact over the long term. Thus, how to maintain long-term human–robot interactions has become an active research topic. Many researchers have confirmed that presenting a robot’s internal aspects such as sociability [6,7,8], a mind [9,10,11,12,13,14,15], perceived intelligence [5, 16], likability [17], animacy [18] as well as anthropomorphic aspects [19] are important for maintaining human–robot interaction. In order to present a robot’s internal aspects, some researchers have utilized the robot’s experiences, such as greeting a user with his or her name that was obtained in past interactions [1, 5], showing the number of interaction times with a user [2], mentioning the user’s habits or behaviors that were obtained in past interactions [3, 4], and referring to past experiences of successfully doing something [20]. These ways to present the robot’s experiences have been demonstrated to facilitate long-term interaction to some extent.
However, the previous studies [1,2,3,4,5, 20] have two shortcomings. First, they did not present a general way of making dialogue to present a robot’s experiences: in other words, a dialogue structure. Because of the lack of a dialogue structure, we are not sure when and how to make a robot mention its experiences during a conversation. A dialogue structure for a robot to present its experiences is needed to develop practical applications for social robots. Second, previous studies focused only on sharing past experiences of interactions with the same person currently interacting with the robot. According to Langer et al. [21], sharing various stories related to the speaker’s own experiences demonstrates a mind in human–human conversation; we speculate that this method should also be useful for robots to show their mind. Although we suspect that robots should mention past experiences of interactions with not only the current user but also others, the effects of mentioning such experiences have not been investigated and are still unclear.
In this paper, we propose a novel dialogue structure called experience-based dialogue that lets robots present their experiences by referring to past interactions with users, including users other than the current one. This structure gives robots the ability to share their experiences and make their utterances contain more story-like information than just mentioning the user’s name [1, 5], how many times they have interacted [2], or simply mentioning the user’s habits and behavior [3, 4].
To evaluate whether experience-based dialogue can help a robot present its internal aspects (i.e., mind, sociability, perceived intelligence, anthropomorphism, animacy) and sustain long-term human–robot interaction (i.e., acceptance of the robot and positive user reaction), we implemented a robot dialogue system and conducted an experiment comparing experience-based dialogue with a knowledge-based dialogue structure and a None condition. For the knowledge-based dialogue, the robot gives utterances that contain purely statistical data. Under the None condition, no experience or knowledge messages were inserted; a chatbot was used, and questions were inserted as in the other two conditions. Subjects were asked to evaluate the above aspects through a questionnaire. Our study and the related research on mind have evaluated this aspect based on the mind perception scale [22], which consists of experience and agency factors and has already been widely used in many related studies. Eighteenth-century philosophers distinguished the ability to think (reason) from the ability to feel (sentience), which has inspired some researchers to use the word “sentience” to represent a concept that is similar to or overlapping with the word “mind”. The mind perception scale contains indexes for evaluating the ability to think in the agency factor and indexes for evaluating the ability to feel in the experience factors. To prevent potential confusion, we adopt simple, consistent terminology throughout this paper. Detailed information on the mind perception scale is presented in Sect. 4.2.4.
The rest of this paper is organized as follows. Related works are discussed in Sect. 2. The proposed dialogue structure is presented in Sect. 3. The experiment and results are discussed in Sect. 4. The conclusions are given in Sect. 5.
2 Related Works
2.1 Conveying Information About the Robot’s Internal State
Studies [12,13,14,15] have demonstrated that the evaluation of robots will be improved if they appropriately convey some information to express their mind on their own initiative during human–robot interaction.
According to Atkinson [15], if a robot can convey information via some appropriate external behavior to reflect its own internal state, the evaluation will improve. Nonverbal behavior is also useful for a robot to present its mind. Fong et al. [13] and Breazeal and Scassellati [14] endowed a robot with the ability to express its perception and understanding of the dialogue and collaboration during an interaction via nonverbal behavior so that the robot can convey its own mind. These studies demonstrated that nonverbal behavior can be used to convey the internal state of a robot. However, this approach greatly relies on human imagination, so it is difficult to control what internal state is imagined. Furthermore, these studies did not consider a robot sharing experiences.
2.2 Presenting Experiences by Sharing a Name
In social application scenarios, the ability of a robot to handle interpersonal experiences plays an important role in presenting social relationships. In this case, a robot providing experiences about interacting with people could improve its evaluation [1, 5].
Gockley et al. [5] endowed a robot with the ability to remember visitors’ names, which would help it maintain a long-term relationship. Sabelli et al. [1] studied the long-term interaction between a robot and seniors in an elderly care home and showed that enabling a robot to call the elderly person’s name made him or her more willing to interact with the robot.
These studies demonstrated that a robot can imply its own experiences simply by calling the user’s name to improve its impression. However, it is not clear how the robot should behave after greeting the user. Moreover, these studies did not examine whether and how much their strategies could enhance the mind and sociability of the robot.
2.3 Presenting Experiences by Sharing the Interaction History
Mentioning the history of previous interactions during a human–robot interaction can reflect the connectivity of the robot with interactive objects, which can let the robot show its rapport with society on its own initiative and present the robot’s social ability and perceived intelligence [2, 3].
Kanda et al. [2] designed a robot that can remember not only user names but also the accumulated interaction times. In addition, they designed a mechanism for robots to learn something from human–robot interaction and express such experiences in a certain way in later interactions. In this experiment, they demonstrated that a robot showing its own experiences played a key role in triggering user interest. Kanda et al. [3] later expanded their work to design a robot as a shop guide and allowed the robot to present the dialogue history of users. This shop guide robot provided different greetings to frequent customers to build rapport and continued a previously discussed topic for shop advertisement. Such an interaction strategy helped the robot show its experience of talking with humans very well, promoted the willingness of people to interact with the robot, and improved the evaluation of the robot’s perceived intelligence.
These studies confirmed that a robot showing its past interactions to current interlocutors can help improve its evaluation, but they did not discuss whether the effect would be maintained or deepened when a robot shares its experiences about others with whom it previously interacted. Additionally, the above studies did not discuss a desired structure for inserting the robot’s experience into a human–robot conversation, which may limit the applicability and generalizability of the dialogue.
2.4 Presenting Experiences Through Personalized Conversations
Inserting a personalized conversation in human–human conversation improves the relationship and increases the attractiveness of the speaker. Omaggio [23] showed that the teacher effectiveness ratings obtained from supervisors and students are significantly correlated with the degree to which verbal interactions in the language classroom are personalized. The same insight can be extended to human–robot conversations.
Lee et al. [4] developed the Snackbot, which can remember a user’s snack choice, service usage, and the robot’s own behavior history. The Snackbot can use this information as an experience of talking with a human to make personalized small talk during its snack delivery service. This personalized interaction strategy can reinforce a person’s rapport, cooperation, and engagement with the robot. The results indicated that sharing the experience of talking about preferences could improve the evaluation of the robot’s friendliness.
However, they still limited the robot to interacting with the same person for the shared experience. In addition, because the robot only shared its experience of interacting with the same person, the topics and chatting content could be limited; this may influence the user’s motivation. It was also unclear when and how a robot should insert its experience.
2.5 Using Scripting Techniques
To determine what kind of content or structure in the dialogue strategy affects the users’ perception of the robot, many studies use scripting techniques to better control the compared condition.
Vossen et al. [24] developed a system presenting a robot’s mind based on experiences and what other people told the robot. Specifically, they built a memory function to store and retrieve the knowledge obtained during human–robot interactions. However, they simply let the robot mention pure knowledge without telling a story about how the robot gained that knowledge, and they did not conduct an HRI experiment to examine whether the system can help a robot present its mind.
Glas et al. [17] and Graaf et al. [18] proposed a human–robot interaction strategy based on a personalized greeting, which presented the robot’s ability to remember people. Their interview results showed that the participants became familiar with the robot and would have liked to interact with it again, but the participants also hoped the robot could have deeper and wider conversations with them, which meant that their system needed an extended strategy for furthering the conversation and maintaining the long-term interaction.
Zheng et al. [25] and Richards et al. [6] have confirmed that interactions would become more enjoyable if the robot could remember and mention the current user’s shared information. However, they did not investigate the effects of the robot mentioning other users’ shared information rather than that of the current user. Our research complements this validation.
Generally speaking, our study aims to examine the effects of mentioning users’ shared experiences in human–robot interaction. The main difference between our study and previous research is that most previous studies investigated the effects of mentioning the current user’s shared information, whereas our study evaluated the effects of mentioning other people’s shared experiences rather than those of the current user.
3 Experience-Based Dialogue
In short, we endowed a robot with the ability to present its experiences by utilizing the experience-based dialogue structure. As the content to be presented through experience-based dialogue, we used the robot’s experiences of interacting with people. Telling one’s experiences is an effective way to evoke the listener’s empathy because it implies that the robot has social relationships, and listeners have such relationships themselves [24, 26, 27]. Thus, experience-based dialogue should affect a person’s cognition of the conveyed information. It should also make the listener believe that the robot has agency, experience, and sociability.
3.1 Experiences of Communication Robots
Here, we distinguish between two types of experience.
Personal experience is centered around the activities of a robot. It involves stories where the robot found or discovered some knowledge by itself. For instance, “I saw a plane flying in the sky” can be a personal experience of a robot.
Joint experience with other people involves stories that happen during an interaction. These stories are about how a robot obtains knowledge from a human–robot interaction. For instance, “Chason told me that there was a plane flying in the sky” is the robot’s joint experience from talking with a person called Chason.
3.2 Experience and Knowledge
To formalize the structure of experience-based dialogue, we can categorize the messages involved in the dialogue by using the concepts of experience and knowledge. Here, knowledge is information that the speaker believes to be objective or a fact. Experience represents how the speaker obtained the knowledge. By using Equations (1) and (2), we can define a knowledge message and an experience message by what kind of information is involved. For example, “tissue paper is a sort of paper” is a knowledge message. If we process knowledge with experience as given in Equation (2), an experience message is obtained, such as “my programmer tells me that tissue paper is a sort of paper.” The experience message gives not only knowledge but also a story about how the speaker obtained this knowledge.
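To make the distinction concrete, the two message types can be sketched as simple string constructors. This is a minimal illustration only; the function names and wording are our assumptions, not the implementation behind Equations (1) and (2).

```python
def knowledge_message(knowledge: str) -> str:
    # Knowledge message: state the information directly (Equation (1) style).
    return knowledge[0].upper() + knowledge[1:] + "."

def experience_message(source: str, knowledge: str) -> str:
    # Experience message: wrap the same knowledge in a story of how it
    # was obtained (Equation (2) style).
    return f"{source} told me that {knowledge}."
```

For instance, `experience_message("My programmer", "tissue paper is a sort of paper")` yields an utterance that conveys the same knowledge as the knowledge message plus its provenance.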
Fletcher et al. [24] reported that listening to stories of the speaker can induce brain activity in the listener, which helps the speaker connect with the audience and make it much more likely that they can imagine the speaker’s point of view. Therefore, the experience message is an effective method in interpersonal communication to make the listener feel the message content and engage the listener more in the conversation. In other words, this kind of narrative should attract interlocutors and is important for a social robot to motivate a human to interact with it.
Here, we describe the details of how we composed the experience messages in our experiment. Equation (3) was used to indicate that the robot is going to share an experience; we defined this as an experience flag sentence, in which the time indicates a date and someone indicates a person interacting with the robot. For example, a sentence mentioning a past interaction may be “Last week, a girl with long hair came to my lab and talked with me.” This sentence refers to an experience with another person. Sentences composed according to Equation (4) are constructed like “She told me she liked eggs.” With Equations (3) and (4), a person can know when, where, and how the robot learned this knowledge.
4 Experiment
To demonstrate the effects of inserting an experience of talking with other people, a subjective English conversation experiment was conducted with a within-subject design to compare dialogue with an experience message, with a knowledge message, and without either type of message. The hypothesis was that, with experience-based dialogue, the robot will seem more mindful, intelligent, sociable, anthropomorphic, animate, and likable; improve subjects’ acceptance; and obtain more positive user reactions. We used a robot that performed these dialogues with subjects and evaluated the subjects’ impressions through a questionnaire. We designated the experiment conditions as follows: the experience-based dialogue used the experience message, the knowledge-based dialogue used the knowledge message, and the None condition used neither message.
4.1 Dialogue Systems
4.1.1 Dialogue Flow under Each Condition
Figure 1 illustrates the dialogue flows adopted for the three conditions. All conditions started with a short talk given by a chatbot that generated utterances in response to the user’s utterances. After exchanging utterances N times, the chatbot provided the experience message under the experience-based dialogue condition, the knowledge message under the knowledge-based dialogue condition, and another short talk under the None condition. Under the experience-based dialogue condition, the chatbot asked a question about the user’s preference on the same topic mentioned in the experience message. The chatbot then asked an additional question on the same topic. Note that the chatbot asked the same questions under both the knowledge-based and None conditions. We set \(N=5\) in this experiment.
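The per-trial flow can be sketched as a single control loop. This is a sketch under assumed interfaces: all the callable arguments (`respond`, `say`, the message and question generators) are illustrative names, not the interfaces of the actual system.

```python
N = 5  # number of chatbot exchanges before the inserted message

def run_trial(condition, respond, get_utterance, say,
              experience_msg, knowledge_msg, preference_q, additional_q):
    """One trial of the flow in Fig. 1 (argument names are assumptions)."""
    for _ in range(N):                  # short talk by the chatbot
        say(respond(get_utterance()))
    if condition == "experience":
        say(experience_msg())           # experience-based condition
    elif condition == "knowledge":
        say(knowledge_msg())            # knowledge-based condition
    else:
        say(respond(get_utterance()))   # None condition: more short talk
    say(preference_q())                 # same questions in all conditions
    say(additional_q())
```

The same questions follow the condition-specific insertion in every branch, which is what keeps the three conditions comparable.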
4.1.2 Details for Common Functions
The structure of our system is depicted in Fig. 2. We built a chatbot based on the seq2seq model [28]. We trained this model on the NPS chat corpus of English conversations from NLTK [29]. For the preprocessing of this corpus, we separated sentences based on odd and even indices; odd-indexed sentences were used as training data, and even-indexed sentences were used as labels. Table 1 presents an example dialogue with the trained chatbot. Although the responses sometimes sounded unnatural, we used this chatbot to provide the short talk under all conditions.
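The odd/even pairing described above amounts to the following sketch (using 1-based sentence numbering as in the text). Loading the corpus itself, e.g. via NLTK's `nps_chat` corpus reader, is omitted here for self-containedness.

```python
def split_odd_even(sentences):
    """Pair odd-numbered sentences (1st, 3rd, ...) as training inputs
    with the following even-numbered sentences (2nd, 4th, ...) as labels.
    A trailing unpaired sentence is dropped by zip().
    """
    inputs = sentences[0::2]   # 1st, 3rd, ... (0-based even indices)
    labels = sentences[1::2]   # 2nd, 4th, ...
    return list(zip(inputs, labels))
```

Each resulting (input, label) pair serves as one training example for the seq2seq model.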
4.1.3 Details for Experience-Based Dialogue
Because our aim was to endow a robot with the ability to share its past interaction experiences, we needed to build a function to record the interaction information as the robot’s memory. The memory format is presented in Table 2. It was specifically designed for conversations on preferences. The preferences and additional information were extracted from the user’s answers to the robot’s preference and additional-information questions and then stored. Note that our system can identify the polarity of the subject’s answer and add an appropriate verb: either “likes” or “dislikes.” However, the values for the user and gender required manual annotation.
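The memory format and the polarity handling can be sketched as follows. The field names model the memory format in Table 2, and the cue-word list is an illustrative stand-in for a real polarity classifier; both are assumptions, not the exact implementation.

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    # Fields model the memory format in Table 2; `user` and `gender`
    # were annotated manually in the experiment.
    date: str
    user: str
    gender: str
    preference: str
    additional_info: str

# Illustrative negative cue words; a real system would use a proper
# polarity classifier instead of a keyword list.
NEGATIVE_CUES = {"no", "not", "don't", "never", "dislike", "hate"}

def polarity_verb(answer: str) -> str:
    """Choose 'likes' or 'dislikes' from the polarity of the user's answer."""
    tokens = set(answer.lower().split())
    return "dislikes" if tokens & NEGATIVE_CUES else "likes"
```

A stored record then pairs the chosen verb with the extracted preference (e.g., "likes eggs") for later reuse in experience messages.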
To compose a sentence for the experience message, we made templates as given in Table 3. First, the system chooses the experience that is about to be introduced. Then, it randomly selects a template and completes the sentence by filling in values for the date, user, preference, and additional information corresponding to the chosen experience. Note that the appropriate gendered form is chosen to match the recorded gender. We convert the recorded time into a literal expression (e.g., yesterday, two days ago, last week). Example sentences created by this process are given in the first item of the experience-based column in Table 4.
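The date conversion and template filling might look like the following sketch. The template string and the exact day-range cutoffs are hypothetical examples in the spirit of Table 3, not the actual templates.

```python
import datetime

def literal_date(event: datetime.date, today: datetime.date) -> str:
    """Convert a recorded date into a literal expression
    (e.g., yesterday, N days ago, last week)."""
    days = (today - event).days
    if days == 1:
        return "Yesterday"
    if days < 7:
        return f"{days} days ago"
    if days < 14:
        return "Last week"
    return "On " + event.strftime("%B %d")

# Hypothetical flag-sentence template; the real templates are in Table 3.
TEMPLATE = "{date}, {user} came to my lab and talked with me."

def flag_sentence(event, today, user):
    return TEMPLATE.format(date=literal_date(event, today), user=user)
```

For example, an interaction recorded seven days ago with "a girl with long hair" produces the flag sentence quoted in Sect. 3.2.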
4.1.4 Details for Knowledge-Based Dialogue
Because the knowledge-based dialogue was prepared for comparison to the experience-based dialogue, we did not build a sentence generation function for it. To balance the information involved in the knowledge message and experience message, we manually made the knowledge message corresponding to each experience message (see the first item of the knowledge-based column in Table 4).
4.2 Method
4.2.1 Subjects
Twenty-four students with a university education background and fluent in English (\(average\ age = 25.3\)) participated in this experiment. They participated in conversation trials under two of the three conditions: experience-based dialogue, knowledge-based dialogue, and None. In other words, 12 subjects participated in dialogues with an experience message and knowledge message, while the remaining 12 subjects participated in dialogues with an experience message and neither message. Note that the order of the dialogues was counterbalanced in both comparisons.
4.2.2 Apparatus
To properly balance the information conveyed by the experience message and knowledge message, we first collected experience messages and created a corresponding knowledge message for each. We performed an investigation on the Internet and around our university to get rough statistics about preferences among different groups of people. The results of the investigation were used to create the knowledge messages given in Table 5. The preferences and additional information were extracted from these sentences to generate the experience messages. To make the sentences sound natural, we randomly chose “like” or “dislike” following the probabilities found in this investigation.
4.2.3 Procedure
Before starting our experiment, we explained the purpose and the procedure of the experiment to the participants. The participants agreed to participate in the experiment and filled out the consent form. The current study was approved by the ethics committee for research involving human subjects at the Graduate School of Engineering Science, Osaka University. Following informed consent, the experimenter gave instructions on the experiment to the subjects and introduced the robot CommU, which was placed on a table (see Fig. 3). The experimenter told the subjects to talk casually with the robot. To start the conversation, they were asked to say “Hello” to the robot when ready. Before the conversation began, the experimenter asked the subjects to fill in a score for the level of acceptance of the robot. The dialogue described in Fig. 1 was repeated five times in total under each condition. When the conversation finished, the experimenter asked the subjects to fill in all items of the questionnaire listed in Table 6.
4.2.4 Measurement
Table 6 presents the scales used in this experiment. We used the mind perception scale [22, 30] to measure how humans attribute a mind to agents. We used the three-factor version [30], which consists of positive experience (E+), negative experience (E-), and agency (A). The E+ questions measure consciousness, desire, personality, pleasure, and pride. The E- questions measure embarrassment, fear, hunger, pain, and rage. The A questions measure communication, emotion recognition, joy, memory, morality, planning, self-control, and thought. Godspeed [31] includes factors such as anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety; each factor consists of a small number of questions for evaluating agents. Felt social skill [32] was used to evaluate the sociability of the robot. In our analysis, the felt social skill was evaluated according to the factor structure extracted from all subjects who attended Naito’s study [32]. To evaluate the level of acceptance of the robot, we asked the following question before and after the experiment: “From 1 to 7, how much do you accept the robot?”
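Scoring the three mind-perception factors amounts to averaging the item ratings belonging to each factor. A minimal sketch follows; the item-to-factor mapping simply transcribes the description above, while the dict-based interface is an assumption.

```python
# Item-to-factor mapping of the three-factor mind perception scale,
# following the factor descriptions above.
FACTORS = {
    "E+": ["consciousness", "desire", "personality", "pleasure", "pride"],
    "E-": ["embarrassment", "fear", "hunger", "pain", "rage"],
    "A": ["communication", "emotion recognition", "joy", "memory",
          "morality", "planning", "self-control", "thought"],
}

def factor_scores(ratings):
    """Average the per-item ratings (a dict of item -> score) per factor."""
    return {factor: sum(ratings[item] for item in items) / len(items)
            for factor, items in FACTORS.items()}
```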
Participants in the experience-based dialogue condition and the knowledge-based dialogue condition were also empirically observed. Specifically, we asked three people with university educations to review videos of the experiment and count the participants’ positive reactions based on their evaluations of each participant’s facial expressions (e.g., smiling or frowning) and moods reflecting each experience message or knowledge message. The results were determined by a vote; a majority (i.e., at least two out of three votes for each reaction) decided the result. We did not conduct empirical observations of the None condition because no experience or knowledge messages were included.
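The majority-vote tally can be sketched as follows; the data layout (one boolean per rater per message) is an assumption for illustration.

```python
def count_positive_reactions(rater_votes):
    """Count messages judged positive by majority vote.

    `rater_votes` holds one tuple of three booleans per message (one per
    rater); a message counts as positive when at least two of the three
    raters marked it as a positive reaction.
    """
    return sum(1 for votes in rater_votes if sum(votes) >= 2)
```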
4.2.5 Predictions
Prediction 1 The robot will score higher on agency (A), positive experience (E+), and negative experience (E-) with the experience-based dialogue condition than the other two conditions.
Prediction 2 The experience-based dialogue condition will have a higher score for the felt social skill than the other two conditions.
Prediction 3 The experience-based dialogue condition will have higher scores for the items in Godspeed than the other two conditions.
Prediction 4 The experience-based dialogue condition will have higher scores for the level of acceptance of the robot and likeability than the other two conditions.
4.3 Results
4.3.1 None and Experience-Based Dialogue
Figure 4a shows the box plots for the level of acceptance before and after the experiment. There was no difference between the None (\(M=4.25\), \(\mathrm{SD}=1.60\)) and experience-based dialogue (\(M=4.16\), \(\mathrm{SD}=1.40\)) conditions (\(t(x)=-0.14\), n.s.) before the experiment. However, the experience-based dialogue condition had a higher level of acceptance (\(M=5.17\), \(\mathrm{SD}=1.59\)) than the None condition (\(M=3.42\), \(\mathrm{SD}=1.51\)) (\(t(x)=2.77\), \(p<0.05\), \(d=1.13\)) after the experiment, where d is Cohen’s d.
Figure 4b shows the box plots for the scores of anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. Anthropomorphism was higher under the experience-based dialogue condition (\(M=2.81\), \(\mathrm{SD}=0.71\)) than the None condition (\(M=2.05\), \(\mathrm{SD}=0.60\)) (\(t(x)=2.87\), \(p<0.05\), \(d=1.17\)). Animacy was higher under the experience-based dialogue condition (\(M=3.02\), \(\mathrm{SD}=0.85\)) than the None condition (\(M=2.14\), \(\mathrm{SD}=0.55\)) (\(t(x)=3.04\), \(p<0.05\), \(d=1.05\)). Likeability was higher under the experience-based dialogue condition (\(M=3.57\), \(\mathrm{SD}=0.96\)) than the None condition (\(M=2.66\), \(\mathrm{SD}=0.79\)) (\(t(x)=2.50\), \(p<0.05\), \(d=1.32\)). Perceived intelligence was higher under the experience-based dialogue condition (\(M=3\), \(\mathrm{SD}=0.79\)) than the None condition (\(M=2.33\), \(\mathrm{SD}=0.66\)) (\(t(x)=2.23\), \(p<0.05\), \(d=0.91\)). There was no significant difference in perceived safety between the experience-based dialogue condition (\(M=2.86\), \(\mathrm{SD}=0.73\)) and None condition (\(M=2.86\), \(\mathrm{SD}=0.74\)) (\(t(x)=1.02\), n.s.).
Figure 4c shows the box plots of the factor scores for mind perception: agency (A), positive experience (E+), and negative experience (E-). E+ was higher under the experience-based dialogue condition (\(M=2.97\), \(\mathrm{SD}=0.90\)) than the None condition (\(M=2.11\), \(\mathrm{SD}=0.56\)) (\(t(x)=2.80\), \(p<0.05\), \(d=1.14\)). A was higher under the experience-based dialogue condition (\(M=3.23\), \(\mathrm{SD}=0.82\)) than the None condition (\(M=2.26\), \(\mathrm{SD}=0.57\)) (\(t(x)=3.35\), \(p<0.05\), \(d=1.36\)). E- was higher under the experience-based dialogue condition (\(M=2.67\), \(\mathrm{SD}=0.62\)) than the None condition (\(M=2.12\), \(\mathrm{SD}=0.58\)) (\(t(x)=2.23\), \(p<0.05\), \(d=0.91\)). Figure 4d shows the box plots of the factor scores for social skill. The score was higher under the experience-based dialogue condition (\(M=3.75\), \(\mathrm{SD}=1.42\)) than the None condition (\(M=2.25\), \(\mathrm{SD}=0.75\)) (\(t(x)=3.49\), \(p<0.05\), \(d=1.3\)).
4.3.2 Knowledge-Based Dialogue and Experience-Based Dialogue
Figure 5a shows the box plots of the level of acceptance before and after the experiment. There was no difference between the knowledge-based dialogue condition (\(M=4.83\), \(\mathrm{SD}=1.34\)) and experience-based dialogue (\(M=4.91\), \(\mathrm{SD}=1.68\)) conditions (\(t(x)=-0.13\), n.s.) before the experiment. However, the experience-based dialogue condition had a higher level of acceptance (\(M=6.08\), \(\mathrm{SD}=1.16\)) than the knowledge-based dialogue condition (\(M=4.41\), \(\mathrm{SD}=1.72\)) after the experiment (\(t(x)=2.77\), \(p<0.05\), \(d=1.13\)).
Figure 5b shows the box plots of the scores for anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. Anthropomorphism was higher under the experience-based dialogue condition (\(M=3.28\), \(\mathrm{SD}=0.72\)) than the knowledge-based dialogue condition (\(M=2.23\), \(\mathrm{SD}=0.57\)) (\(t(x)=3.92\), \(p<0.05\), \(d=1.36\)). Animacy was higher under the experience-based dialogue condition (\(M=3.32\), \(\mathrm{SD}=0.76\)) than the knowledge-based dialogue condition (\(M=2.5\), \(\mathrm{SD}=0.55\)) (\(t(x)=3.04\), \(p<0.05\), \(d=1.2\)). Perceived intelligence was higher under the experience-based dialogue condition (\(M=3.52\), \(\mathrm{SD}=0.64\)) than the knowledge-based dialogue condition (\(M=2.90\), \(\mathrm{SD}=0.47\)) (\(t(x)=2.70\), \(p<0.05\), \(d=1.1\)). There was no significant difference in likability between the experience-based dialogue condition (\(M=4.07\), \(\mathrm{SD}=0.51\)) and knowledge-based dialogue condition (\(M=3.70\), \(\mathrm{SD}=0.59\)) (\(t(x)=1.62\), n.s.). There was no significant difference in perceived safety between the experience-based dialogue condition (\(M=2.97\), \(\mathrm{SD}=0.41\)) and knowledge-based dialogue condition (\(M=2.72\), \(\mathrm{SD}=0.31\)) (\(t(x)=1.67\), n.s.).
Figure 5c shows the box plots of the factor scores for mind perception: agency (A), positive experience (E+), and negative experience (E-). E+ was higher under the experience-based dialogue condition (\(M=3.29\), \(\mathrm{SD}=0.53\)) than the knowledge-based dialogue condition (\(M=2.83\), \(\mathrm{SD}=0.52\)) (\(t(x)=2.22\), \(p<0.05\), \(d=0.88\)). A was higher under the experience-based dialogue condition (\(M=3.51\), \(\mathrm{SD}=0.38\)) than the knowledge-based dialogue condition (\(M=3.08\), \(\mathrm{SD}=0.48\)) (\(t(x)=2.48\), \(p<0.05\), \(d=1.05\)). There was no significant difference in E- between the experience-based dialogue condition (\(M=2.58\), \(\mathrm{SD}=0.47\)) and knowledge-based dialogue condition (\(M=2.3\), \(\mathrm{SD}=0.51\)) (\(t(x)=1.45\), n.s., \(d=0.57\)).
Figure 5d shows the box plots of the factor scores for social skill. The score was higher under the experience-based dialogue condition (\(M=4.08\), \(\mathrm{SD}=1.08\)) than the knowledge-based dialogue condition (\(M=2.83\), \(\mathrm{SD}=0.93\)) (\(t(x)=3.02\), \(p<0.05\), \(d=1.23\)).
We now report the results of the empirical observations. In our experiment, five experience or knowledge messages were used in each conversation, and each condition was repeated in 12 trials, so the participants’ reactions to a total of 60 messages could be observed under the experience- and knowledge-based dialogue conditions. As Table 7 shows, the experience-based dialogue (\(M=3.5\), \(\mathrm{SD}=1.09\)) obtained significantly more positive reactions from participants than the knowledge-based dialogue (\(M=2.42\), \(\mathrm{SD}=1.08\)) (\(t(x)=2.44\), \(p<0.05\), \(d=0.99\)). Moreover, Fig. 6 clearly shows that the positive reaction curve of the knowledge-based dialogue decays more quickly than that of the experience-based dialogue.
5 Discussion
5.1 Verification of Predictions
Our proposed method significantly improved the robot’s level of acceptance, anthropomorphism, animacy, likability, perceived intelligence, agency, positive experience, negative experience, and sociability compared to the None condition. This indicates that conveying information about human tendencies in the form of the robot’s own experiences helps the robot excel in the above aspects, which have long been of interest in the field of human–robot interaction. Meanwhile, the proposed method also scored higher than the knowledge-based dialogue condition in terms of the level of acceptance, positive user reaction, anthropomorphism, animacy, perceived intelligence, agency, positive experience, and sociability. Because the knowledge-based dialogue conveys the same information without referring to the person who gave it to the robot, the difference between the conditions (i.e., the robot drawing on its own experience of communicating with other people) is the effective factor behind the higher evaluations on these aspects. In other words, except for negative experience, likability, and perceived safety, the results mostly agreed with Predictions 1–4. Additionally, the empirical observations revealed a significant difference between the experience- and knowledge-based dialogues, consistent with the results for the level of acceptance. This supports our claim that experience-based dialogue can better engage people and has the potential to sustain long-term interaction.
We consider our verification credible because we carefully designed the dialogue content and structure for a fair comparison across conditions. We generated the robot’s utterances solely by replacing the subject words (i.e., a person or statistics) and removing the source of information (i.e., the experience behind the knowledge), thereby ensuring equivalent content and balance between the experience-based and knowledge-based dialogues.
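To make this manipulation concrete, the two conditions can be thought of as two renderings of the same underlying knowledge: one names the person the robot learned it from, the other replaces the subject with statistics and drops the source clause. The function names and template wording below are hypothetical illustrations, not the paper's actual generation code:

```python
def experience_utterance(person: str, knowledge: str) -> str:
    # Experience-based: subject is a specific person, and the phrasing
    # reveals the source of information (the robot's past interaction).
    return f"{person} told me that {knowledge}."

def knowledge_utterance(knowledge: str) -> str:
    # Knowledge-based: same content, subject replaced with statistics,
    # source of information removed.
    return f"Many people say that {knowledge}."

# Both conditions convey identical knowledge; only the framing differs.
fact = "ramen shops near the station are popular"
print(experience_utterance("Taro", fact))
print(knowledge_utterance(fact))
```

Keeping the knowledge string itself shared between the two generators is what guarantees the equivalent content the comparison relies on.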
5.2 Possible Practical Applications
For applications of social robots, interaction is indispensable; a dialogue structure that can engage users is necessary for a robot to interact well with humans. A dialogue structure built around an experience can enhance the user’s motivation when interacting with a robot in public places [1,2,3,4,5,6, 12, 17, 18] and help improve the robot’s internal aspects such as sociability [2], mind [13, 14], and perceived intelligence [3]. Our findings contribute to human–robot conversation: by using the proposed dialogue structure to convey an experience message, a robot can better present its internal state across all of the aspects mentioned above and improve its human likeness. The proposed dialogue structure can also be flexibly adapted to an experience of listening to humans. Therefore, our results suggest that a robot can use experience-based dialogue to maintain a long-term interaction.
5.3 Limitations and Future Work
The lack of positive results for likability and perceived safety may indicate that these evaluations are easily influenced by participants' opinions about the treatment of private information. Some people may hesitate to listen to others' experiences, or may imagine that such communication could expose their own private information; this may explain why our results showed no significant differences for these aspects. We need to design a conversation by which the robot obtains permission or trust from a human before conveying private information.
With regard to the mind aspect, we did not find a significant difference in terms of negative experience (E-). This may have been caused by the choice of contents introduced by the robot. In our experiment, we only used the robot’s experience to casually talk with a person about his or her preferences. In other words, there were few chances for the robot to mention serious facts or events involving negative aspects.
However, if robots start being frequently used in the real world, they may have opportunities for many other types of experiences, such as observing a person’s dialogue without participating, observing a person’s behavior (e.g., walking, watching, working), and observing objects around the robot. These individual activities may give robots more opportunities to express their personalities or emotions; for example, the robot could say “I found everyone too busy to talk to me. I feel so bored.” In short, an experience message about these types of experiences would convey multimodal observation capabilities, showing how the robot perceives the world, as well as milder or less positive attitudes toward humans; this may better simulate humanlike behavior involving negative experiences in a real human society.
In this study, we did not complete an entire human–robot dialogue system for experience-based dialogue; rather, we verified and studied how the robot performs in sharing experiences. An important part of our future work is to establish a memory mechanism that lets the robot remember the experiences shared by users. The robot could then collect real experiences from human–robot interaction and mention them through such experience-based dialogues.
6 Conclusions
At present, more social robots are becoming involved in nursing and companionship. In most cases, people are curious about a robot and interact with it enthusiastically at first but easily lose interest after a while. These social roles require robots to present their abilities and internal aspects during interaction to obtain a better evaluation from people. Robots can leverage empathy to lead people to infer the robots’ abilities, social relationships, personalities, etc.
In this paper, we drew from the literature in psychology and linguistics to construct the experience-based dialogue structure, where a piece of knowledge and a story about how the robot gained the knowledge are used to compose the robot’s utterance and help it present its internal aspects. Our results showed that experience-based dialogue can improve the evaluation of the robot in terms of perceived intelligence, sociability, mind, anthropomorphism, and animacy. Moreover, based on the higher scores for the level of acceptance and the greater numbers of positive user reactions, a robot with experience-based dialogue would be better able to maintain a long-term interaction.
References
Sabelli AM, Kanda T, Hagita N (2011) A conversational robot in an elderly care center: an ethnographic study. In: 2011 6th ACM/IEEE international conference on human–robot interaction (HRI). ACM, New York, pp 37–44
Kanda T, Sato R, Saiwaki N, Ishiguro H (2004) Friendly social robot that understands human’s friendly relationships. In: 2004 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, Piscataway, NJ
Kanda T, Shiomi M, Miyashita Z, Ishiguro H, Hagita N (2010) A communication robot in a shopping mall. IEEE Trans Robot 26(5):897–913
Lee MK, Forlizzi J, Kiesler S, Rybski P, Antanitis J, Savetsila S (2012) Personalization in HRI: a longitudinal field experiment. In: 7th ACM/IEEE international conference on human–robot interaction (HRI). IEEE, Piscataway, NJ
Gockley R, Bruce A, Forlizzi J, et al. (2005) Designing robots for long-term social interaction. In: 2005 IEEE/RSJ international conference on intelligent robots and systems. IEEE, Piscataway, NJ
Richards D, Bransky K (2014) ForgetMeNot: what and how users expect intelligent virtual agents to recall and forget personal conversational content. Int J Hum–Comput Stud 72(5):460–476
Baddoura R, Venture G (2013) Social vs. useful HRI: experiencing the familiar, perceiving the robot as a sociable partner and responding to its actions. Int J Soc Robot 5(4):529–547
Wiltshire TJ, Lobato EJC, Garcia DR, Fiore SM, Jentsch FG, Huang WH, Axelrod B (2015) Effects of robotic social cues on interpersonal attributions and assessments of robot interaction behaviors. Proc Hum Factors Ergon Soc Annu Meet 59(1):801–805
Stafford RQ, MacDonald BA, Jayawardena C, Wegner DM, Broadbent E (2014) Does the robot have a mind? Mind perception and attitudes towards robots predict use of an eldercare robot. Int J Soc Robot 6(1):17–32
Dreyfus HL, Dreyfus SE, Zadeh LA (1987) Mind over machine: the power of human intuition and expertise in the era of the computer. IEEE Expert 2(2):110–111
Searle JR (1980) Minds, brains, and programs. Behav Brain Sci 3(3):417–424
Breazeal C (2003) Toward sociable robots. Robot Auton Syst 42(3–4):167–175
Fong T, Thorpe C, Baur C (2003) Collaboration, dialogue, human–robot interaction. In: Robotics research. Springer, Berlin, Heidelberg, pp 255–266
Breazeal C, Scassellati B (1999) How to build robots that make friends and influence people. In: Proceedings of the 1999 IEEE/RSJ international conference on intelligent robots and systems: human and environment friendly robots with high intelligence and emotional quotients. IEEE, Piscataway, NJ
Atkinson DJ (2015) Robot trustworthiness: guidelines for simulated emotion. In: Proceedings of the tenth annual ACM/IEEE international conference on human–robot interaction extended abstracts. ACM, New York
Leite I, Martinho C, Paiva A (2013) Social robots for long-term interaction: a survey. Int J Soc Robot 5(2):291–308
Glas DF, Wada K, Shiomi M, Kanda T, Ishiguro H, Hagita N (2017) Personal greetings: personalizing robot utterances based on novelty of observed behavior. Int J Soc Robot 9(2):181–198
De Graaf MM, Allouch SB, Klamer T (2015) Sharing a life with harvey: exploring the acceptance of and relationship-building with a social robot. Comput Hum Behav 43:1–14
Duffy BR (2003) Anthropomorphism and the social robot. Robot Auton Syst 42(3–4):177–190
Andrist S, Spannan E, Mutlu B (2013) Rhetorical robots: Making robots more effective speakers using linguistic cues of expertise. In: Proceedings of the 8th ACM/IEEE international conference on human–robot interaction. IEEE, Piscataway, NJ
Langer EJ (1989) Mindfulness. Addison-Wesley/Addison Wesley Longman, Boston
Gray HM, Gray K, Wegner DM (2007) Dimensions of mind perception. Science 315(5812):619
Omaggio AC (1982) The relationship between personalized classroom talk and teacher effectiveness ratings: some research results. Foreign Lang Ann 15(4):255–269
Vossen P, Baez S, Bajcetić L, Kraaijeveld B (2018) Leolani: a reference machine with a theory of mind for social communication. In: International conference on text, speech, and dialogue. Springer, Cham, pp 15–25
Zheng X, Glas DF, Minato T, Ishiguro H (2019) Four memory categories to support socially-appropriate conversations in long-term HRI. In: Workshop on personalization in long-term human–robot interaction (14th annual ACM/IEEE international conference on human–robot interaction). Daegu, South Korea
Gallo C (2014) Talk like TED: the 9 public-speaking secrets of the world’s top minds. St. Martin’s Press, New York
Pivac M, Granić A (2017) Storytelling in web design: a case study. In: 2017 40th international convention on information and communication technology, electronics and microelectronics (MIPRO). IEEE, Piscataway, NJ
Qiu M, Li FL, Wang S, Gao X, Chen Y, Zhao W, Chen H, Huang J, Chu W (2017) AliMe chat: a sequence to sequence and rerank based chatbot engine. In: Proceedings of the 55th annual meeting of the association for computational linguistics. ACL, Stroudsburg, PA
Forsyth EN, Martell CH (2007) Lexical and discourse analysis of online chat dialog. In: International conference on semantic computing (ICSC 2007). IEEE, Piscataway, NJ
Kamide H, Takashima K, Arai T (2017) Development of Japanese version of the psychological scale of anthropomorphism. Jpn J Pers 25(3):218–225 (in Japanese)
Bartneck C, Croft E, Kulic D (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81
Naito Y (2013) Congruence between self-evaluation and other-evaluation based on social skills. Rissho Univ Annu Rep Psychol 4:39–43 (in Japanese)
Kidd CD, Breazeal C (2008) Robots at home: understanding long-term human–robot interaction. In: 2008 IEEE/RSJ international conference on intelligent robots and systems. IEEE, Piscataway, NJ, pp 3230–3235
Fletcher PC, Happé F, Frith U, Baker SC, Dolan RJ, Frackowiak RSJ, Frith CD (1995) Other minds in the brain: a functional imaging study of “theory of mind” in story comprehension. Cognition 57(2):109–128
Salem M, Eyssel F, Rohlfing K, Kopp S, Joublin F (2013) To err is human (-like): effects of robot gesture on perceived anthropomorphism and likability. Int J Soc Robot 5(3):313–323
de Graaf MM, Allouch SB, van Dijk JA (2016) Long-term evaluation of a social robot in real homes. Interact Stud 17(3):462–491
Tannen D (1986) Introducing constructed dialogue in Greek and American conversational and literary narrative. Direct Indirect Speech 3:11–32
Acknowledgements
This work was supported by JST ERATO Grant Number JPMJER1401, Japan.
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
Fu, C., Yoshikawa, Y., Iio, T. et al. Sharing Experiences to Help a Robot Present Its Mind and Sociability. Int J of Soc Robotics 13, 341–352 (2021). https://doi.org/10.1007/s12369-020-00643-y