1 Introduction

Nowadays, more and more social robots are serving people in nursing homes [1], schools [2], shops [3], restaurants [4], information desks [5], etc. For these services, people and robots are assumed to interact over the long term. Thus, how to maintain long-term human–robot interaction has become an active research topic. Many researchers have confirmed that presenting a robot’s internal aspects, such as sociability [6,7,8], mind [9,10,11,12,13,14,15], perceived intelligence [5, 16], likability [17], animacy [18], and anthropomorphism [19], is important for maintaining human–robot interaction. To present a robot’s internal aspects, some researchers have utilized the robot’s experiences: greeting a user with his or her name obtained in past interactions [1, 5], showing the number of times the robot has interacted with a user [2], mentioning the user’s habits or behaviors observed in past interactions [3, 4], and referring to past experiences of successfully doing something [20]. These ways of presenting a robot’s experiences have been demonstrated to facilitate long-term interaction to some extent.

However, the previous studies [1,2,3,4,5, 20] have two shortcomings. First, they did not present a general way of constructing dialogue that presents a robot’s experiences, in other words, a dialogue structure. Without such a structure, it is unclear when and how a robot should mention its experiences during a conversation. A dialogue structure for a robot to present its experiences is needed to develop practical applications for social robots. Second, previous studies focused only on sharing past experiences of interactions with the same person currently interacting with the robot. According to Langer et al. [21], sharing various stories related to the speaker’s own experiences demonstrates a mind in human–human conversation; we speculate that this method should also be useful for robots to show their mind. Although we suspect that robots should mention past experiences of interactions not only with the current user but also with others, the effects of mentioning such experiences have not been investigated and remain unclear.

In this paper, we propose a novel dialogue structure called experience-based dialogue that lets robots present experiences by referring to past interactions with users, including users other than the current one. This structure gives robots the ability to share their experiences and make their utterances contain more story-like information than simply mentioning the user’s name [1, 5], how many times they have interacted [2], or the user’s habits and behavior [3, 4].

To evaluate whether experience-based dialogue can help a robot present its internal aspects (i.e., mind, sociability, perceived intelligence, anthropomorphism, animacy) and sustain long-term human–robot interaction (i.e., acceptance of the robot and positive user reactions), we implemented a robot dialogue system and conducted an experiment comparing experience-based dialogue to a knowledge-based dialogue structure and a None condition. Under the knowledge-based dialogue, the robot gives utterances that contain purely statistical data. Under the None condition, neither experience nor knowledge is inserted; a chatbot was used, and questions were inserted as in the other two conditions. Subjects were asked to evaluate the above aspects through a questionnaire. Our study, like related research on mind, evaluated this aspect with the mind perception scale [22], which consists of experience and agency factors and has been widely used in related studies. Eighteenth-century philosophers distinguished the ability to think (reason) from the ability to feel (sentience), which has inspired some researchers to use the word “sentience” for a concept similar to or overlapping with “mind”. The mind perception scale contains indexes for evaluating the ability to think in the agency factor and indexes for evaluating the ability to feel in the experience factor. To prevent potential confusion in such comparisons, we adopt simple, consistent terminology. Detailed information on the mind perception scale is presented in Sect. 4.2.4.

The rest of this paper is organized as follows. Related works are discussed in Sect. 2. The proposed dialogue structure is presented in Sect. 3. The experiment and results are discussed in Sect. 4. The conclusions are given in Sect. 5.

2 Related Works

2.1 Conveying Information About the Robot’s Internal State

Studies [12,13,14,15] have demonstrated that the evaluation of a robot improves if it appropriately conveys information expressing its mind on its own initiative during human–robot interaction.

According to Atkinson [15], if a robot can convey information via appropriate external behavior that reflects its internal state, its evaluation will improve. Nonverbal behavior is also useful for a robot to present its mind. Fong et al. [13] and Breazeal and Scassellati [14] endowed robots with the ability to express their perception and understanding of the dialogue and collaboration during an interaction via nonverbal behavior so that the robots could convey their own minds. These studies demonstrated that nonverbal behavior can be used to convey the internal state of a robot. However, this approach relies heavily on the human’s imagination, so it is difficult to control what internal state is imagined. Furthermore, these studies did not consider a robot sharing experiences.

2.2 Presenting Experiences by Sharing a Name

In social application scenarios, a robot’s ability to handle interpersonal experiences plays an important role in presenting social relationships. In this case, a robot providing experiences about interacting with people could improve its evaluation [1, 5].

Gockley et al. [5] endowed a robot with the ability to remember visitors’ names, which would help it maintain a long-term relationship. Sabelli et al. [1] studied the long-term interaction between a robot and seniors in an elderly care home and showed that enabling a robot to call the elderly person’s name made him or her more willing to interact with the robot.

These studies demonstrated that a robot can imply its own experiences simply by calling the user by name, which improves its impression. However, it is not clear how the robot should behave after greeting the user. Moreover, these studies did not examine whether and how much their strategies could enhance the perceived mind and sociability of the robot.

2.3 Presenting Experiences by Sharing the Interaction History

Mentioning the history of previous interactions during a human–robot interaction can reflect the robot’s connections with its interaction partners, which lets the robot show its rapport with society on its own initiative and present its social ability and perceived intelligence [2, 3].

Kanda et al. [2] designed a robot that can remember not only user names but also the accumulated number of interactions. In addition, they designed a mechanism for robots to learn something from human–robot interaction and express such experiences in a certain way in later interactions. In this experiment, they demonstrated that a robot showing its own experiences played a key role in triggering user interest. Kanda et al. [3] later expanded their work by designing a robot as a shop guide and allowing it to present the dialogue history of users. This shop guide robot gave different greetings to frequent customers to build rapport and continued previously discussed topics for shop advertisement. Such an interaction strategy helped the robot clearly show its experience of talking with humans, promoted people’s willingness to interact with the robot, and improved the evaluation of the robot’s perceived intelligence.

These studies confirmed that a robot showing its past interactions to current interlocutors can help improve its evaluation, but they did not discuss whether the effect would be maintained or deepened when the robot shares its experiences about others with whom it previously interacted. Additionally, the above studies did not discuss a desirable structure for inserting the robot’s experience into a human–robot conversation, which may limit the applicability and generalizability of the dialogue.

2.4 Presenting Experiences Through Personalized Conversations

Inserting personalized conversation into human–human conversation improves the relationship and increases the attractiveness of the speaker. Omaggio [23] showed that teacher effectiveness ratings obtained from supervisors and students are significantly correlated with the degree to which verbal interactions in the language classroom are personalized. The same insight can plausibly be extended to human–robot conversations.

Lee et al. [4] developed the Snackbot, which could remember a user’s snack choices, service usage, and the robot’s own behavior history. The Snackbot used this information as an experience of talking with a human to make personalized small talk during its snack delivery service. This personalized interaction strategy can reinforce a person’s rapport, cooperation, and engagement with the robot. The results indicated that sharing the experience of talking about preferences could improve the evaluation of the robot’s friendliness.

However, the shared experiences were still limited to interactions with the same person. Because the robot only shared its experience of interacting with that person, the topics and chatting content could be limited, which may reduce the user’s motivation. It was also unclear when and how a robot should insert its experience.

2.5 Using Scripting Techniques

To determine what kind of dialogue content or structure affects users’ perception of the robot, many studies use scripting techniques to better control the compared conditions.

Vossen et al. [24] developed a system presenting a robot’s mind based on experiences and what other people told the robot. Specifically, they built a memory function to store and retrieve the knowledge obtained during human–robot interactions. However, they simply let the robot mention pure knowledge without telling a story about how the robot gained that knowledge, and they did not conduct an HRI experiment to examine whether the system can help a robot present its mind.

Glas et al. [17] and Graaf et al. [18] proposed a human–robot interaction strategy based on personalized greetings, which presented the robot’s ability to remember people. Their interview results showed that the participants became familiar with the robot and would have liked to interact with it again, but the participants also hoped the robot could have deeper and wider conversations with them; thus, their system needed an extended strategy for furthering the conversation and maintaining long-term interaction.

Zheng et al. [25] and Richards et al. [6] confirmed that interactions become more enjoyable when the robot can remember and mention the current user’s shared information. However, they did not investigate the effects of the robot mentioning other users’ shared information rather than that of the current user. Our research complements this validation.

Generally speaking, our study aims to examine the effects of mentioning users’ shared experiences in human–robot interaction. The main difference from previous research is that most previous studies investigated the effects of mentioning the current user’s shared information, whereas our study evaluated the effects of mentioning other people’s shared experiences.

3 Experience-Based Dialogue

In short, we endowed a robot with the ability to present its experience by utilizing the experience-based dialogue structure. As the content presented by experience-based dialogue, we used the robot’s experiences of interacting with people. Telling one’s experience is an effective way to evoke the listener’s empathy because it implies that the speaker has social relationships, and such relationships also exist for listeners themselves [24, 26, 27]. Thus, experience-based dialogue should affect a person’s information cognition. It should also make the listener believe that the robot has agency, experience, and sociability.

3.1 Experiences of Communication Robots

Here, we distinguish between two types of experience.

Personal experience is centered around the activities of a robot. It involves stories where the robot found or discovered some knowledge by itself. For instance, “I saw a plane flying in the sky” can be a personal experience of a robot.

Joint experience with other people involves stories that happen during an interaction. These stories are about how a robot obtains knowledge from a human–robot interaction. For instance, “Chason told me that there was a plane flying in the sky” is the robot’s joint experience from talking with a person called Chason.

3.2 Experience and Knowledge

To formalize the structure of experience-based dialogue, we categorize the messages involved in the dialogue using the concepts of experience and knowledge. Here, knowledge is information that the speaker believes to be objective or factual. Experience represents how the speaker obtained that knowledge. Equations (1) and (2) define a knowledge message and an experience message by the kind of information involved. For example, “tissue paper is a sort of paper” is a knowledge message. If we augment knowledge with experience as in Equation (2), an experience message is obtained, such as “my programmer tells me that tissue paper is a sort of paper.” The experience message conveys not only knowledge but also a story about how the speaker obtained it.

$$\begin{aligned} \mathrm{Knowledge\ Message} = \mathrm{Knowledge} \end{aligned}$$
(1)
$$\begin{aligned} \mathrm{Experience\ Message} = \mathrm{Knowledge} + \mathrm{Experience} \end{aligned}$$
(2)

Fletcher et al. [24] reported that listening to a speaker’s stories induces brain activity in the listener, which helps the speaker connect with the audience and makes it much more likely that listeners can imagine the speaker’s point of view. Therefore, the experience message is an effective means in interpersonal communication of making the listener feel the message content and engaging the listener more in the conversation. In other words, this kind of narrative should attract interlocutors and is important for a social robot that aims to motivate humans to interact with it.

Here, we describe how we constructed the experience messages in our experiment. Equation (3) indicates that the robot is going to share an experience; we call this an experience flag sentence, in which the time indicates a date and someone indicates a person who interacted with the robot. For example, a sentence mentioning a past interaction may be “Last week, a girl with long hair came to my lab and talked with me.” This refers to an experience with another person. Sentences composed according to Equation (4) take a form like “She told me she liked eggs.” With Equations (3) and (4), a person can know when, where, and how the robot learned this knowledge.

$$\begin{aligned} \langle \mathrm{time}\rangle + \langle \mathrm{someone}\rangle + (\mathrm{talked\ to\ me})\ \mathrm{or}\ (\ldots ) \end{aligned}$$
(3)
$$\begin{aligned} \langle \mathrm{someone}\rangle + (\mathrm{told\ me})\ \mathrm{or}\ (\mathrm{said\ that})\ \mathrm{or}\ (\ldots ) \end{aligned}$$
(4)
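As an illustration, the following Python sketch composes messages according to Equations (1)–(4). The template phrases and example content here are assumptions for demonstration only; the strings actually used in our system are given in Table 3.

```python
import random

def knowledge_message(knowledge: str) -> str:
    # Equation (1): Knowledge Message = Knowledge
    return knowledge

def experience_message(time: str, someone: str, pronoun: str,
                       knowledge: str) -> str:
    # Equation (3): <time> + <someone> + ("talked to me" or ...)
    flag = f"{time}, {someone} talked to me."
    # Equation (4): <someone> + ("told me" or "said that" or ...)
    verb = random.choice(["told me", "said that"])
    # Equation (2): Experience Message = Knowledge + Experience
    return f"{flag} {pronoun.capitalize()} {verb} {knowledge}"

print(knowledge_message("Tissue paper is a sort of paper."))
print(experience_message("Last week", "a girl with long hair", "she",
                         "she liked eggs."))
```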

4 Experiment

To demonstrate the effects of inserting an experience of talking with other people, a subjective English conversation experiment was conducted with a within-subject design comparing dialogue with an experience message, with a knowledge message, and without either type of message. The hypothesis was that, with experience-based dialogue, the robot would seem more mindful, intelligent, sociable, anthropomorphic, animate, and likable and would gain greater acceptance and more positive user reactions. We used a robot that performed these dialogues with subjects and evaluated the subjects’ impressions through a questionnaire. We designated the experimental conditions as follows: the experience-based dialogue used the experience message, the knowledge-based dialogue used the knowledge message, and the None condition used neither message.

4.1 Dialogue Systems

4.1.1 Dialogue Flow under Each Condition

Fig. 1 Dialogue flows for the (a) None condition, (b) experience-based dialogue, and (c) knowledge-based dialogue

Figure 1 illustrates the dialogue flows adopted for the three conditions. All three started with small talk from a chatbot that generated utterances in response to the user’s utterances. After exchanging utterances N times, the system provided the experience message under the experience-based dialogue condition, the knowledge message under the knowledge-based dialogue condition, and another round of chatbot small talk under the None condition. Under the experience-based dialogue condition, the chatbot then asked a question about the user’s preference on the same topic mentioned in the experience message, followed by an additional question on the same topic. Note that the chatbot asked the same questions under the knowledge-based and None conditions. We set \(N=5\) in this experiment.
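As a rough sketch, the loop above can be organized as follows; this is a hypothetical controller, where chatbot_reply, make_message, and ask_question stand in for the components described in Sects. 4.1.2–4.1.4.

```python
N = 5  # number of chatbot exchanges before the insertion, as in the experiment

def run_dialogue(condition, chatbot_reply, make_message, ask_question):
    """Hypothetical controller for the flow in Fig. 1 (assumed helpers:
    chatbot_reply, make_message, ask_question)."""
    for _ in range(N):                        # small-talk phase
        print(chatbot_reply(input("> ")))

    if condition in ("experience", "knowledge"):
        print(make_message(condition))        # experience or knowledge message
    else:                                     # None condition: more small talk
        print(chatbot_reply(input("> ")))

    # The same preference question and follow-up are asked in all conditions.
    print(ask_question("preference"))
    input("> ")
    print(ask_question("additional"))
    input("> ")
```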

Fig. 2 Structure of the system, which contains three main functions: memory, chatbot, and retrieval

Table 1 Examples of chatbot responses
Table 2 The format for memory

4.1.2 Details for Common Functions

The structure of our system is depicted in Fig. 2. We built a chatbot based on the seq2seq model [28] and trained it on the NPS Chat corpus of English conversations from NLTK [29]. To preprocess this corpus, we separated sentences by odd and even indices; odd sentences were used as training data, and even sentences were used as labels. Table 1 presents a dialogue example of the trained chatbot. Although the responses sometimes sounded odd, we used this chatbot to provide small talk under all conditions.
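For illustration, a minimal version of this odd/even pairing over the NPS Chat corpus could look as follows; the seq2seq training itself [28] is omitted, and the exact tokenization we used may differ.

```python
# Sketch of the corpus preprocessing described above: posts from the NLTK
# NPS Chat corpus are paired so that odd-numbered posts (1-based) become
# inputs and the following even-numbered posts become response labels.
import nltk

nltk.download("nps_chat", quiet=True)
posts = [" ".join(tokens) for tokens in nltk.corpus.nps_chat.posts()]

inputs = posts[0::2]   # 1st, 3rd, 5th, ... posts -> training inputs
labels = posts[1::2]   # 2nd, 4th, 6th, ... posts -> response labels
pairs = list(zip(inputs, labels))
print(len(pairs), pairs[0])
```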

4.1.3 Details for Experience-Based Dialogue

Because our aim was to endow a robot with the ability to share its past interaction experiences, we needed to build a function that records interaction information as the robot’s memory. The memory format is presented in Table 2. It was specifically designed for conversations about preferences. The preferences and additional information are extracted from the user’s answers to the robot’s preference and additional-information questions and then stored. Note that our system can identify the polarity of the subject’s answer and add an appropriate verb: either “likes” or “dislikes.” However, the values for the user and gender required manual annotation.
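A sketch of such a memory record is given below, assuming the fields named above (date, user, gender, polarity verb, preference, and additional information); the authoritative schema is Table 2, and the polarity check here is a toy keyword rule, not the classifier in our system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MemoryRecord:
    when: date
    user: str        # manually annotated, as noted above
    gender: str      # manually annotated, as noted above
    verb: str        # "likes" or "dislikes", chosen from answer polarity
    preference: str  # e.g., "eggs"
    additional: str  # extra detail from the follow-up answer

NEGATIVE_CUES = ("don't", "not", "hate", "dislike")

def polarity_verb(answer: str) -> str:
    """Toy polarity check assigning 'likes' or 'dislikes'."""
    negative = any(cue in answer.lower() for cue in NEGATIVE_CUES)
    return "dislikes" if negative else "likes"

record = MemoryRecord(date(2020, 1, 10), "Chason", "male",
                      polarity_verb("I really like eggs"), "eggs",
                      "he eats them every morning")
print(record)
```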

Table 3 List of templates for experience-based dialogue

To compose a sentence for the experience message, we made templates as given in Table 3. First, the system chooses the experience that is about to be introduced. Then, it randomly selects a template and completes the sentence by filling in the values for the date, user, preference, and additional information corresponding to the chosen experience. Note that the appropriate pronoun form is chosen to match the gender, and the recorded time is converted into a literal expression (e.g., yesterday, two days ago, last week). Example sentences created by this process are given in the first item of the experience-based column in Table 4.
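The composition step can be sketched as follows; the template strings are illustrative stand-ins for those in Table 3, and the date conversion covers only a few of the literal forms mentioned above.

```python
import random
from datetime import date

TEMPLATES = [
    "{date}, {user} came to my lab and talked with me. "
    "{pronoun_cap} told me {pronoun} {verb} {preference}.",
    "{date}, I talked with {user}. "
    "{pronoun_cap} said that {pronoun} {verb} {preference}.",
]

def literal_date(event: date, today: date) -> str:
    """Render a recorded date in a literal manner (simplified)."""
    days = (today - event).days
    if days == 0:
        return "Today"
    if days == 1:
        return "Yesterday"
    if days < 7:
        return f"{days} days ago"
    return "Last week" if days < 14 else f"{days // 7} weeks ago"

def compose(record: dict, today: date) -> str:
    # Choose the pronoun form matching the annotated gender.
    pronoun = "she" if record["gender"] == "female" else "he"
    template = random.choice(TEMPLATES)
    return template.format(
        date=literal_date(record["when"], today),
        user=record["user"], pronoun=pronoun,
        pronoun_cap=pronoun.capitalize(),
        verb="liked" if record["verb"] == "likes" else "disliked",
        preference=record["preference"],
    )

print(compose({"when": date(2020, 1, 3), "user": "a girl with long hair",
               "gender": "female", "verb": "likes", "preference": "eggs"},
              today=date(2020, 1, 10)))
```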

Table 4 Dialogue examples for experiment

4.1.4 Details for Knowledge-Based Dialogue

Because the knowledge-based dialogue was prepared for comparison with the experience-based dialogue, we did not build a sentence generation function for it. To balance the information involved in the knowledge and experience messages, we manually created a knowledge message corresponding to each experience message (see the first item of the knowledge-based column in Table 4).

4.2 Method

4.2.1 Subjects

Twenty-four students with university education backgrounds who were fluent in English (average age = 25.3) participated in this experiment. They participated in conversation trials under two of the three conditions: experience-based dialogue, knowledge-based dialogue, and None. In other words, 12 subjects participated in dialogues with an experience message and a knowledge message, while the remaining 12 subjects participated in dialogues with an experience message and neither message. Note that the order of the dialogues was counterbalanced in both comparisons.

4.2.2 Apparatus

To properly balance the information conveyed by the experience message and knowledge message, we first collected experience messages and created corresponding knowledge messages. We conducted an investigation on the Internet and around our university to obtain rough statistics about preferences among different groups of people. The results of this investigation were used to create the knowledge messages given in Table 5. The preferences and additional information were extracted from these sentences to generate the experience messages. To make the sentences sound natural, we randomly chose “like” or “dislike” following the probabilities found in the investigation.
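The polarity sampling amounts to a weighted draw; in the sketch below, the probability is a placeholder, not a figure from our investigation.

```python
# Draw "like" vs. "dislike" according to the proportion found in the
# investigation (p_like = 0.7 is a made-up demonstration value).
import random

def sample_polarity(p_like: float = 0.7) -> str:
    return random.choices(["like", "dislike"],
                          weights=[p_like, 1 - p_like])[0]

print(f"Most students {sample_polarity()} eggs.")
```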

Table 5 Examples of knowledge messages

4.2.3 Procedure

Before starting our experiment, we explained its purpose and procedure to the participants. The participants agreed to participate in the experiment and filled out the consent form. The study was approved by the ethics committee for research involving human subjects at the Graduate School of Engineering Science, Osaka University. Following informed consent, the experimenter gave the subjects instructions on the experiment and introduced the robot CommU, which was placed on a table (see Fig. 3). The experimenter told the subjects to talk casually with the robot and to say “Hello” to it when ready to start the conversation. Before the conversation began, the experimenter asked the subjects to give a score for their level of acceptance of the robot. The dialogue described in Fig. 1 was repeated five times in total under each condition. When the conversation finished, the experimenter asked the subjects to fill in all items of the questionnaire listed in Table 6.

4.2.4 Measurement

Table 6 presents the scales used in this experiment. We used the mind perception scale [22, 30] to measure how humans attribute a mind to agents. We used the three-factor version [30], which consists of positive experience (E+), negative experience (E-), and agency (A). The E+ questions measure consciousness, desire, personality, pleasure, and pride. The E- questions measure embarrassment, fear, hunger, pain, and rage. The A questions measure communication, emotion recognition, joy, memory, morality, planning, self-control, and thought. Godspeed [31] includes the factors anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety; each factor consists of a small number of questions for evaluating agents. Felt social skill [32] was used to evaluate the sociability of the robot. In our analysis, felt social skill was evaluated according to the factor structure extracted from all subjects who participated in Naito’s study [32]. To evaluate the level of acceptance of the robot, we asked the following question before and after the experiment: “From 1 to 7, how much do you accept the robot?”

Fig. 3 Scene of the experiment

Participants in the experience-based and knowledge-based dialogue conditions were also observed empirically. Specifically, we asked three people with university educations to review videos of the experiment and count the participants’ positive reactions based on their evaluations of each participant’s facial expressions (e.g., smiling or frowning) and moods in response to each experience or knowledge message. Each reaction was decided by majority vote (i.e., at least two of the three votes). We did not conduct empirical observations of the None condition because it included no experience or knowledge messages.
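In code, this decision rule is simply a two-of-three threshold; a minimal sketch:

```python
# Majority vote over three annotators: a reaction is counted as positive
# when at least two of the three marked it positive.
def majority_positive(votes: list) -> bool:
    return sum(bool(v) for v in votes) >= 2

print(majority_positive([True, True, False]))   # True
print(majority_positive([True, False, False]))  # False
```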

Table 6 Evaluated factors for testing hypotheses

4.2.5 Predictions

Prediction 1 The robot will score higher on agency (A), positive experience (E+), and negative experience (E-) under the experience-based dialogue condition than under the other two conditions.

Prediction 2 The experience-based dialogue condition will have a higher score for felt social skill than the other two conditions.

Prediction 3 The experience-based dialogue condition will have higher scores for the items in Godspeed than the other two conditions.

Prediction 4 The experience-based dialogue condition will have higher scores for the level of acceptance of the robot and likeability than the other two conditions.

4.3 Results

4.3.1 None and Experience-Based Dialogue

Fig. 4 Results under the None condition and experience-based dialogue condition: (a) level of acceptance, (b) Godspeed, (c) mind perception scale, and (d) felt social skill

Figure 4a shows the box plots for the level of acceptance before and after the experiment. There was no difference between the None (\(M=4.25\), \(\mathrm{SD}=1.60\)) and experience-based dialogue (\(M=4.16\), \(\mathrm{SD}=1.40\)) conditions (\(t(x)=-0.14\), n.s.) before the experiment. However, the experience-based dialogue condition had a higher level of acceptance (\(M=5.17\), \(\mathrm{SD}=1.59\)) than the None condition (\(M=3.42\), \(\mathrm{SD}=1.51\)) after the experiment (\(t(x)=2.77\), \(p<0.05\), \(d=1.13\)), where d is Cohen’s d.

Figure 4b shows the box plots for the scores of anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. Anthropomorphism was higher under the experience-based dialogue condition (\(M=2.81\), \(\mathrm{SD}=0.71\)) than the None condition (\(M=2.05\), \(\mathrm{SD}=0.60\)) (\(t(x)=2.87\), \(p<0.05\), \(d=1.17\)). Animacy was higher under the experience-based dialogue condition (\(M=3.02\), \(\mathrm{SD}=0.85\)) than the None condition (\(M=2.14\), \(\mathrm{SD}=0.55\)) (\(t(x)=3.04\), \(p<0.05\), \(d=1.05\)). Likeability was higher under the experience-based dialogue condition (\(M=3.57\), \(\mathrm{SD}=0.96\)) than the None condition (\(M=2.66\), \(\mathrm{SD}=0.79\)) (\(t(x)=2.50\), \(p<0.05\), \(d=1.32\)). Perceived intelligence was higher under the experience-based dialogue condition (\(M=3\), \(\mathrm{SD}=0.79\)) than the None condition (\(M=2.33\), \(\mathrm{SD}=0.66\)) (\(t(x)=2.23\), \(p<0.05\), \(d=0.91\)). There was no significant difference in perceived safety between the experience-based dialogue condition (\(M=2.86\), \(\mathrm{SD}=0.73\)) and None condition (\(M=2.86\), \(\mathrm{SD}=0.74\)) (\(t(x)=1.02\), n.s.).

Figure 4c shows the box plots of the factor scores for mind perception: agency (A), positive experience (E+), and negative experience (E-). E+ was higher under the experience-based dialogue condition (\(M=2.97\), \(\mathrm{SD}=0.90\)) than the None condition (\(M=2.11\), \(\mathrm{SD}=0.56\)) (\(t(x)=2.80\), \(p<0.05\), \(d=1.14\)). A was higher under the experience-based dialogue condition (\(M=3.23\), \(\mathrm{SD}=0.82\)) than the None condition (\(M=2.26\), \(\mathrm{SD}=0.57\)) (\(t(x)=3.35\), \(p<0.05\), \(d=1.36\)). E- was higher under the experience-based dialogue condition (\(M=2.67\), \(\mathrm{SD}=0.62\)) than the None condition (\(M=2.12\), \(\mathrm{SD}=0.58\)) (\(t(x)=2.23\), \(p<0.05\), \(d=0.91\)). Figure 4d shows the box plots of the factor scores for social skill. The score was higher under the experience-based dialogue condition (\(M=3.75\), \(\mathrm{SD}=1.42\)) than the None condition (\(M=2.25\), \(\mathrm{SD}=0.75\)) (\(t(x)=3.49\), \(p<0.05\), \(d=1.3\)).
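For reference, the comparisons above can be reproduced with a paired t-test and an effect size, as sketched below under the within-subject design of Sect. 4.2.1; the data are fabricated for demonstration, and the pooled-SD variant of Cohen’s d is an assumption, since the exact formula used is not stated here.

```python
import numpy as np
from scipy.stats import ttest_rel

def paired_test(a: np.ndarray, b: np.ndarray):
    """Paired t-test plus Cohen's d (pooled-SD convention, an assumption)."""
    t, p = ttest_rel(a, b)
    pooled_sd = np.sqrt((a.std(ddof=1) ** 2 + b.std(ddof=1) ** 2) / 2)
    d = (a.mean() - b.mean()) / pooled_sd
    return t, p, d

rng = np.random.default_rng(0)
exp_scores = rng.normal(5.2, 1.6, size=12)   # fabricated demo ratings,
none_scores = rng.normal(3.4, 1.5, size=12)  # not the experiment's data
print(paired_test(exp_scores, none_scores))
```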

4.3.2 Knowledge-Based Dialogue and Experience-Based Dialogue

Figure 5a shows the box plots of the level of acceptance before and after the experiment. There was no difference between the knowledge-based dialogue (\(M=4.83\), \(\mathrm{SD}=1.34\)) and experience-based dialogue (\(M=4.91\), \(\mathrm{SD}=1.68\)) conditions (\(t(x)=-0.13\), n.s.) before the experiment. However, the experience-based dialogue condition had a higher level of acceptance (\(M=6.08\), \(\mathrm{SD}=1.16\)) than the knowledge-based dialogue condition (\(M=4.41\), \(\mathrm{SD}=1.72\)) after the experiment (\(t(x)=2.77\), \(p<0.05\), \(d=1.13\)).

Fig. 5 Results comparing the knowledge-based dialogue condition and experience-based dialogue condition: (a) level of acceptance, (b) Godspeed, (c) mind perception scale, and (d) felt social skill

Figure 5b shows the box plots of the scores for anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. Anthropomorphism was higher under the experience-based dialogue condition (\(M=3.28\), \(\mathrm{SD}=0.72\)) than the knowledge-based dialogue condition (\(M=2.23\), \(\mathrm{SD}=0.57\)) (\(t(x)=3.92\), \(p<0.05\), \(d=1.36\)). Animacy was higher under the experience-based dialogue condition (\(M=3.32\), \(\mathrm{SD}=0.76\)) than the knowledge-based dialogue condition (\(M=2.5\), \(\mathrm{SD}=0.55\)) (\(t(x)=3.04\), \(p<0.05\), \(d=1.2\)). Perceived intelligence was higher under the experience-based dialogue condition (\(M=3.52\), \(\mathrm{SD}=0.64\)) than the knowledge-based dialogue condition (\(M=2.90\), \(\mathrm{SD}=0.47\)) (\(t(x)=2.70\), \(p<0.05\), \(d=1.1\)). There was no significant difference in likeability between the experience-based dialogue condition (\(M=4.07\), \(\mathrm{SD}=0.51\)) and knowledge-based dialogue condition (\(M=3.70\), \(\mathrm{SD}=0.59\)) (\(t(x)=1.62\), n.s.). There was no significant difference in perceived safety between the experience-based dialogue condition (\(M=2.97\), \(\mathrm{SD}=0.41\)) and knowledge-based dialogue condition (\(M=2.72\), \(\mathrm{SD}=0.31\)) (\(t(x)=1.67\), n.s.).

Figure 5c shows the box plots of the factor scores for mind perception: agency (A), positive experience (E+), and negative experience (E-). E+ was higher under the experience-based dialogue condition (\(M=3.29\), \(\mathrm{SD}=0.53\)) than the knowledge-based dialogue condition (\(M=2.83\), \(\mathrm{SD}=0.52\)) (\(t(x)=2.22\), \(p<0.05\), \(d=0.88\)). A was higher under the experience-based dialogue condition (\(M=3.51\), \(\mathrm{SD}=0.38\)) than the knowledge-based dialogue condition (\(M=3.08\), \(\mathrm{SD}=0.48\)) (\(t(x)=2.48\), \(p<0.05\), \(d=1.05\)). There was no significant difference in E- between the experience-based dialogue condition (\(M=2.58\), \(\mathrm{SD}=0.47\)) and knowledge-based dialogue condition (\(M=2.3\), \(\mathrm{SD}=0.51\)) (\(t(x)=1.45\), n.s., \(d=0.57\)).

Figure 5d shows the box plots of the factor scores for social skill. The score was higher under the experience-based dialogue condition (\(M=4.08\), \(\mathrm{SD}=1.08\)) than the knowledge-based dialogue condition (\(M=2.83\), \(\mathrm{SD}=0.93\)) (\(t(x)=3.02\), \(p<0.05\), \(d=1.23\)).

Table 7 Results of empirical observation
Fig. 6 Results of the positive user reaction decay curves

Next, we report the results of the empirical observations. In our experiment, five experience or knowledge messages were used in each conversation, and each condition was repeated in 12 trials, so the participants’ reactions to a total of 60 messages could be observed per condition. As Table 7 shows, the experience-based dialogue (\(M=3.5\), \(\mathrm{SD}=1.09\)) obtained significantly more positive reactions from participants than the knowledge-based dialogue (\(M=2.42\), \(\mathrm{SD}=1.08\)) (\(t(x)=2.44\), \(p<0.05\), \(d=0.99\)). Moreover, Fig. 6 clearly shows that the positive reaction curve of the knowledge-based dialogue decays more quickly than that of the experience-based dialogue.

5 Discussion

5.1 Verification of Predictions

Our proposed method significantly improved the robot’s level of acceptance, anthropomorphism, animacy, likability, perceived intelligence, agency, positive experience, negative experience, and sociability compared to the None condition. This indicates that conveying information about human tendencies in the form of the robot’s communicated experiences is useful for improving the robot’s ratings on the above aspects, which have long been of interest in the field of human–robot interaction. Meanwhile, the proposed method also scored higher in terms of level of acceptance, positive user reactions, anthropomorphism, animacy, perceived intelligence, agency, positive experience, and sociability than the knowledge-based dialogue condition. The knowledge-based dialogue conveys the same information without referring to the person who gave it to the robot, so the difference (i.e., drawing on the robot’s own experience of communicating with other people) is an effective factor for the robot to be evaluated highly on these aspects. In other words, except for negative experience, likability, and perceived safety, the results mostly agreed with Predictions 1–4. Additionally, the empirical observations revealed a significant difference between the experience- and knowledge-based dialogues, which was consistent with the results for the level of acceptance. This supports our claim that experience-based dialogue can better engage people and has the potential for sustaining long-term interaction.

We consider our verification credible because we carefully designed the dialogue content and structures for a fair comparison across conditions. We generated the robot’s utterances solely by replacing the subject words (i.e., a person or statistics) and removing the source of information (i.e., experience or knowledge) to ensure equivalent content between the experience-based and knowledge-based dialogues.

5.2 Possible Practical Applications

For applications of social robots, interaction is indispensable; a good dialogue structure that engages users is necessary for a robot to interact well with a human. A dialogue structure built around experience can enhance the user’s motivation when interacting with a robot in public places [1,2,3,4,5,6, 12, 17, 18] and help improve impressions of the robot’s internal aspects such as sociability [2], mind [13, 14], and perceived intelligence [3]. Our findings contribute to human–robot conversation by helping a robot better present its internal state across all the aspects mentioned above and improve its human likeness through a dialogue structure that conveys experience messages when communicating with humans. The proposed dialogue structure can also be flexibly adapted to experiences of listening to humans. Therefore, our results suggest that a robot can use experience-based dialogue to maintain long-term interaction.

5.3 Limitations and Future Work

The lack of positive results for likability and perceived safety may indicate that these evaluations are easily influenced by subjects’ opinions about the treatment of private information. Some people may hesitate to listen to others’ experiences or may feel that such communication exposes their own private information; thus, our results showed no significant difference for these aspects. We need to design conversations in which the robot obtains permission or trust from a human before conveying private information.

With regard to the mind aspect, we did not find a significant difference in terms of negative experience (E-). This may have been caused by the choice of content introduced by the robot. In our experiment, the robot only used its experiences to chat casually with a person about his or her preferences. In other words, there were few chances for the robot to mention any serious facts or events involving negative aspects.

However, if robots come to be frequently used in the real world, they may have opportunities for many other types of experiences, such as observing a person’s dialogue without participating, observing a person’s behavior (e.g., walking, watching, working), and observing objects around the robot. These individual activities may give robots more opportunities to express their personalities or emotions; for example, a robot could say, “I found everyone too busy to talk to me. I feel so bored.” In short, experience messages about these types of experiences would imply multimodal observation capabilities showing how the robot perceives the world, as well as milder or less positive attitudes toward humans, which may better simulate humanlike behavior involving negative experiences in real human society.

In this study, we did not complete an entire human–robot dialogue system for experience-based dialogue; rather, we verified how the robot performs when sharing experiences. An important part of our future work is to establish a memory mechanism for the robot to remember the experiences shared by users. The robot could then collect real experiences from human–robot interactions and mention them through experience-based dialogue.

6 Conclusions

At present, more social robots are becoming involved in nursing and companionship. In most cases, people are curious about a robot and interact with it enthusiastically at first but easily lose interest after a while. These social roles require robots to present their abilities and internal aspects during interaction to obtain better evaluations from people. Robots can utilize empathy to lead people to infer the robots’ abilities, social relationships, personalities, and so on.

In this paper, we drew from the literature in psychology and linguistics to construct the experience-based dialogue structure, in which a piece of knowledge and a story about how the robot gained that knowledge compose the robot’s utterance and help it present its internal aspects. Our results showed that experience-based dialogue can improve the evaluation of the robot in terms of perceived intelligence, sociability, mind, anthropomorphism, and animacy. Moreover, based on the higher scores for the level of acceptance and the greater number of positive user reactions, a robot with experience-based dialogue should be better able to maintain long-term interaction.