Abstract
It has been well documented that children perceive robots as social, mental, and moral others. Studies on child-robot interaction may encourage this perception of robots, first, by using a Wizard of Oz (i.e., teleoperation) set-up and, second, by having robots engage in self-description. However, much remains unknown about the effects of transparent teleoperation and self-description on children’s perception of, and relationship formation with a robot. To address this research gap initially, we conducted an experimental study with a 2 × 2 (teleoperation: overt/covert; self-description: yes/no) between-subject design in which 168 children aged 7–10 interacted with a Nao robot once. Transparency about the teleoperation procedure decreased children’s perceptions of the robot’s autonomy and anthropomorphism. Self-description reduced the degree to which children perceived the robot as being similar to themselves. Transparent teleoperation and self-description affected neither children’s perceptions of the robot’s animacy and social presence nor their closeness to and trust in the robot.
1 Introduction
Social robots, which are designed to interact socially with people (Breazeal et al. 2016), are becoming increasingly present in both personal and professional domains (e.g., Lutz et al. 2019). As a result, it can be expected that, in the near future, human-robot interaction (HRI) will occur more frequently and human-robot relationships will become more common (Edwards et al. 2019). Children, in particular, have a strong tendency to relate socially to non-human entities (Epley et al. 2007). Although child-robot relationships are thus likely to emerge, it remains unclear to what degree these social bonds will resemble children’s relationships with people, pets, and devices (Kory Westlund et al. 2018). That is, research has demonstrated that children’s hybrid conceptualization of social robots overlaps, but does not entirely coincide, with their conceptualizations of humans, animals, or objects (Kahn et al. 2013). At the same time, children do perceive robots to be social others that could potentially be their friends; mental others that are intelligent and emotional; and partly moral others that deserve to be treated fairly (Kahn et al. 2012). Social robots may thus have several practical applications. They could, for instance, accompany hospitalized children at times when no human presence is allowed (e.g., during radiation therapy; Ligthart et al. 2019a, b), or support diabetic children in self-managing their condition (e.g., Baroni et al. 2014).
At the same time, there are concerns about how children’s perception of robots as social, mental, and moral others may be encouraged by the way in which robots are presented to children. First, questions have been raised about the potentially ‘deceptive’ nature of the often-employed Wizard-of-Oz (WOZ) set-up, in which a robot is being remotely controlled during the interaction (for a discussion, see Kory Westlund and Breazeal 2015). Social robots are currently still rather limited in autonomously interacting with people, and children in particular, in a manner that is both socially advanced and technologically reliable (e.g., Tolksdorf et al. 2020; van den Berghe et al. 2019). Therefore, child-robot interaction (CRI) studies often rely upon the WOZ set-up (Kory Westlund and Breazeal 2016; van Straten et al. 2020b). Robots’ limited social capacities can in this way be overcome to some extent (e.g., Stower et al. 2021), giving children “the impression that they are interacting with a [robot] that understands [them] as well as another human would” (Kelley 1984, p. 27) and that appears to be autonomous and thus qualifies for some degree of moral standing and accountability (Johnson 2011; Neeley 2014).
A second concern about current social robots centers on the presentation of robots to children with a backstory that does not accurately reflect their mechanical nature, thus possibly strengthening children’s social behavior and feelings toward them (see Kory Westlund and Breazeal 2019b). For example, in many CRI studies in which (humanoid) robots interact with children verbally, robots engage in self-description, usually by referring to themselves in the first-person and telling children about themselves. This act of self-description implies that a robot possesses knowledge about itself, or that it has a mental representation of its ‘self’ (Lewis 2011). Moreover, when a robot shares information about itself, children may get the impression that the robot is a social actor with a personality of its own (Kory Westlund and Breazeal 2019b; Ligthart et al. 2020), which may lead to the ascription of traits, dispositions, and capacities to the robot (Epley and Waytz 2010). The attribution of a personal identity, in turn, entails that the robot is autonomous (Wrigley 2007) and, thus, morally accountable (Johnson 2011; Neeley 2014).
If we knew how child-robot relationships emerge when robots are not presented as more advanced than they currently are, new light would be shed on the societal and ethical discussion surrounding the topic. However, current research is inconsistent about the effects of presenting robots as they are, for example by making children aware of robots’ remotely controlled nature (e.g., Cameron et al. 2017; de Haas et al. 2016; Tozadore et al. 2017; Turkle et al. 2006). In addition, it remains unclear how a robot’s self-description (i.e., conveying self-related information from a first-person perspective) affects children’s relationship formation with a robot as well as their perceptions of it.
Leite and colleagues (2017, 2016) compared, in two studies, how children in the age ranges of 4–6 and 7–10, respectively, responded to a social robot. These studies consistently showed children in the older age group to be more critical of, and sensitive to, aspects of the robot’s communication (Leite et al. 2017; Leite and Lehman 2016). Indeed, children in middle childhood (i.e., 6–12 years of age; Cole et al. 2005) become increasingly sensitive to social conventions and discourse flexibilities (e.g., Stafford 2004). In addition, they become increasingly able to discern facts from fiction (e.g., Stafford 2004). Against this background, a robot’s self-description and transparent teleoperation may affect how children in middle childhood perceive and relate to a social robot.
Both self-description and transparency about the teleoperation procedure tap into the robot’s status as more or less of a ‘self’: an entity that controls its own actions and has its own unique backstory. They may influence children’s perceptions of social robots and their relationship formation with them independently of each other, but they may also interact: The effects of self-description may depend on whether children are informed in a transparent way about the robot. We therefore studied, in a two-factorial experiment among children aged 7–10 years, whether (a) being transparent about the WOZ set-up before an interaction and (b) a robot’s engagement in self-description affect children’s perception of, and relationship formation with, a humanoid robot. Research on children’s interactions and relationship formation with social robots is still in an early stage (e.g., Peter et al. 2019; Stower et al. 2021) and modeling relationships between various CRI-related concepts requires more basic, preparatory study (e.g., Oliveira et al. 2021; van Straten et al. 2020b). Hence, it seems too early for well-founded predictions about complex interrelationships between children’s robot perceptions and child-robot relationship formation, and for fielding the pertinent, empirically more demanding studies. We therefore decided to initially focus on studying the direct effects of transparency and self-description on children’s perceptions of, and relationship formation with, a social robot.
2 Theoretical framework
2.1 (Transparent) teleoperation in CRI
Several CRI studies have been transparent to children about the WOZ procedure either by informing children about the teleoperation set-up or by demonstrating it to them. In a study among 8- to 13-year-olds, Turkle et al. (2006) found that informing children, after their interaction with a humanoid robot, about the robot’s teleoperated working neither influenced children’s perception of the robot as alive, intelligent, emotional, and humanlike, nor their sense of relationship with it. Tozadore and colleagues (2017), in contrast, reported that children (aged 7–11) perceived a humanoid robot to be less intelligent after hearing that it had been remotely controlled during their conversation with it. When, in yet another study, Cameron et al. (2017) overtly activated a humanoid robot’s emotional expressions by pressing a button on the robot’s chest, children younger than 6 years of age perceived the robot as machine- rather than person-like. Yet, children older than 6 years of age considered the robot a machine regardless of its apparent autonomy (Cameron et al. 2017).
Likewise, De Haas et al. (2016) found that 7- to 8-year-old children’s perceptions of, and behavior toward, a humanoid robot did not differ between conditions in which it functioned autonomously or was being remotely controlled. However, De Haas et al. (2016) did not actively bring the teleoperation procedure to children’s attention and it may have gone unnoticed. This idea is supported by an exploratory study on child-computer interaction, which found that children (aged 12–13) were generally unaware of a teleoperator’s presence (Read et al. 2005). Finally, three studies used robots incapable of verbal interaction. After watching a robotic dog perform a series of movements, children aged 5–7 attributed less physical and emotional sentience as well as less moral standing to the robot when its movements had been overtly activated by an experimenter (Chernyak and Gary 2016). Similarly, children aged 4–5 were less convinced of a mechanomorphic robot’s memory and vision when its movements had been overtly teleoperated, while their belief in its animacy remained intact (Somanader et al. 2011). Yet, Bumby and Dautenhahn (1999) reported that children aged 7–11 ascribed free will to a mechanomorphic robot and continued to anthropomorphize it after seeing a controlling program being downloaded onto the robot.
Some of the abovementioned studies suggest that transparency about the WOZ set-up is effective in changing children’s robot perception. However, the findings of the studies are difficult to compare. Moreover, while Cameron et al. (2017) shed light on children’s categorization of humanoid robots as person- or machine-like, more detailed insights into the effects of transparent teleoperation on children’s perception of, and relationship formation with, such robots is currently lacking. In addition, it remains unknown whether informing children about a humanoid robot’s teleoperated working before their interaction with the robot is effective (see Kory Westlund and Breazeal 2016, for a research proposal).
Regardless of whether transparency about the WOZ set-up is effective or not, giving children the impression that a robot functions autonomously while it is actually being teleoperated may raise ethical questions (see Kory Westlund and Breazeal 2015, 2016). For instance, Scheutz (2012) has outlined that perceived autonomy is crucial to the perception of robots as social, humanlike agents. He emphasizes that robots are no agents and argues that ‘falsely pretending’ the opposite may lead to the emergence of ‘unidirectional emotional bonds’ that may have negative consequences for the human (see Scheutz 2012). Indeed, agency plays an important role in mind perception (see Gray et al. 2007), which largely determines whether we attribute humanlike characteristics to, and form humanlike relationships with, nonhuman others (Epley and Waytz 2010). From both empirical and normative viewpoints, it thus appears timely to investigate whether transparency about a robot’s teleoperated working might alter children’s perception of the robot as well as their relationship formation with it.
Transparency about the WOZ set-up may affect at least five concepts relevant to children’s perception of robots as social, mental, and moral others. First, transparent teleoperation may encourage children to think of a robot as an object rather than an ‘other’, thus affecting their perception of the robot’s animacy (i.e., the “perception of life”, Bartneck et al. 2009, p. 74). Second, transparency may influence their perception of the robot’s autonomy, or “the degree to which the decision-making process used to determine how [its goals] should be pursued, is free from intervention by any other agent” (Barber et al. 2000, p. 133). Third, children’s anthropomorphic thinking, or “the tendency to imbue the real or imagined behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions” (Epley et al. 2007, p. 864) may be affected. Anthropomorphism, fourth, interacts with social presence, or the degree to which the artificiality of a robot goes unnoticed (Lee 2004), and is influenced, fifth, by the degree to which a robot is perceived to be similar to the self (Ames 2004; Epley et al. 2007).
As children’s reasoning about humans and robots overlaps (Kahn et al. 2012, 2013), the literature on interpersonal relationship formation is useful to determine concepts that are relevant to the emergence of a social relationship between a child and robot. Feelings of closeness and trust seem of primary interest here: They develop interdependently and are central to both the emergence of interpersonal relationships in general (Berscheid and Regan 2005) and children’s friendships in particular (Bauminger-Zviely and Agam-Ben-Artzi 2014). Closeness constitutes a feeling of intimacy or connectedness that may develop into friendship (Sternberg 1987). Trust, in turn, has been defined as the belief in another person’s benevolence and honesty (Larzelere and Huston 1980).
In line with Kahn et al. (2012), we consider children’s ratings of a robot’s animacy and social presence as indicative of their perception of the robot’s social otherness; children’s ratings of a robot’s anthropomorphism and similarity to themselves as informative about their perception of its mental otherness; and children’s ratings of a robot’s autonomy as providing insight into children’s perception of the robot as a moral other. While Kahn et al. (2012) see child-robot relationship formation as an inherent aspect of children’s treatment of robots as social others, we treat children’s perceptions of, and relationship formation with, social robots as distinct processes. Considering someone a social entity does not automatically entail that one considers this person to be a (potential) friend. In a similar fashion, we hold that perceiving a robot as a social other and experiencing a relationship with it are separate things.
A recent study found that children’s awareness of a social robot’s lack of humanlike psychological capacities (i.e., capacities of ‘mental others’) decreased children’s ratings of a robot’s animacy, anthropomorphism, social presence, and perceived similarity to the self, as well as children’s trust in the robot (van Straten et al. 2020c). Thus, informing children about a robot’s technological rather than humanlike status alters children’s robot perceptions and affects child-robot relationship formation at least partially. As a robot’s teleoperated working implies that it is not autonomous, we additionally expect that transparency about the teleoperation procedure will decrease children’s perceptions of the robot’s autonomy. Finally, because friendships can be understood as relationships that arise between equal, autonomous entities (Emmeche 2014), “that one chooses to enter—and can choose to leave” (Keller 1997, p. 159), we expect that children’s feelings of closeness toward, and trust in, a robot will decrease when they realize that the robot is being remotely controlled. This expectation receives support from a qualitative, exploratory study among children in middle childhood, which reports that children sometimes based their level of interpersonal trust in a social robot upon their belief in its technological capacities (van Straten et al. 2018). In addition, a recent study on children’s first impressions of a robot’s trustworthiness found that children’s perception of a social robot’s competence predicted their level of trust in the robot (Calvo-Barajas et al. 2020). In summary, we therefore hypothesized that:
Hypothesis 1a (H1a) Transparency about the teleoperation procedure decreases children’s ratings of a robot’s animacy, autonomy, anthropomorphism, social presence, and perceived similarity.
Hypothesis 1b (H1b) Transparency about the teleoperation procedure decreases children’s feelings of closeness toward and trust in a robot.
2.2 Self-description: Telling you about me
For interpersonal relationships to emerge and develop, it is crucial that interactants provide each other with information about themselves (see Roloff 1976). As described in Berger and Calabrese’s (1975) Uncertainty Reduction Theory, this is especially important in initial interactions and early stages of relationship formation, in which the mutual seeking and sharing of self-related information can decrease people’s uncertainty about each other. In its most basic form, sharing self-related information implies self-description, or the act of sharing factual information about oneself (see Culbert 1967, as cited in Gilbert 1976). While relationship formation generally benefits most from the sharing of increasingly intimate information (Gilbert 1976), the importance of intimacy to friendships still develops during primary school years (e.g., Furman and Bierman 1984; Laursen and Hartup 2002), and continues to increase during adolescence (e.g., Bauminger et al. 2008; Berndt 2004). Hence, self-description may suffice for the emergence of children’s early friendships, whether with peers or with robots.
Accordingly, CRI research has suggested that a robot’s engagement in self-description fosters child-robot relationship formation (Kanda et al. 2007; Ligthart et al. 2019a, b; Shiomi et al. 2015; van der Drift et al. 2014). However, these studies did not focus on self-description as an isolated feature of CRI. It is, therefore, difficult to disentangle the effects of self-description in particular from the effects of the robots’ behavior more generally. Moreover, none of the studies adopted an experimental design with self-description as the independent variable, which impairs causal conclusions.
Self-description can be operationalized as self-reference through the use of first-person pronouns when sharing factual, self-related information (see Curtis 1981, for a similar approach in a related context). When referring to the self as an “I”, the provided information is framed as a backstory unique to the one who is speaking. The adoption of a third-person perspective, in contrast, will turn the information into a general description of a larger group of entities (e.g., people, robots). Findings on the effects of pronoun use by nonhuman entities in human-nonhuman communication are mixed. For instance, Brennan and Ohaeri (1994) found that when a computer agent used first-person pronouns to refer to itself, people used more politeness markers (e.g., please, thank you) and were more likely to use the pronoun “you” to refer to the agent in their responses. This may indicate that they considered the computer agent more of a social agent when it used first-person pronouns (Brennan and Ohaeri 1994). Yet, a study on the effects of personal formulations (i.e., containing pronouns) used by a voice agent found no effects on people’s evaluations of the agent’s humanlikeness (Kruijff-Korbayová et al. 2008). In a CRI context, Kory Westlund and colleagues (2016) found that children’s self-reported robot perceptions were unaffected by an experimenter’s reference to the robot as “the robot” versus “a friend” and using the third-person “it” versus second-person pronouns, a subtle difference in children’s gaze patterns notwithstanding.
However, when a robot itself uses first-person pronouns (or not), different findings may emerge than when an experimenter varies the use of pronouns when describing a robot (as in Kory Westlund et al. 2016). Moreover, the aforementioned studies investigated the effects of pronoun use but did not employ the use of pronouns as a means to operationalize self-description. In an experimental study among adults, Eyssel et al. (2017) found that people’s evaluations of a robot were not affected by a robot’s self-description (unless controlling for individual differences in the tendency to anthropomorphize, which revealed a significant effect of self-description on mind attribution). However, in this study, self-description was not operationalized as pronoun use. As Nass and Brave (2005, p. 115) argue, “[w]hen a person avoids the use of I, there must be a reason [and] when personhood is in question, the use of I can resolve the ambiguity”. They add that not using first-person pronouns when speaking about oneself communicates that one does not have “full human status” (Nass and Brave 2005, p. 115). As a consequence, a robot’s avoidance of the use of personal pronouns may decrease children’s perception of the robot as a social, mental, and moral other. Given the centrality of sharing self-related information and reducing the other’s uncertainty about the self to the emergence of interpersonal relationships, self-description—operationalized as self-reference through the use of first-person pronouns—may also affect child-robot relationship formation. Therefore, our second hypothesis predicted:
Hypothesis 2a (H2a) A robot’s engagement in self-description increases children’s ratings of the robot’s animacy, autonomy, anthropomorphism, social presence, and perceived similarity.
Hypothesis 2b (H2b) A robot’s engagement in self-description increases children’s feelings of closeness toward and trust in the robot.
While research on the topic is scarce, the findings presented in an unpublished study by Huang and colleagues (2001), which is available online and cited in Nass and Brave (2005), indicate that the degree to which an artificial entity is perceived to be humanlike may also influence people’s responses to its use of first-person pronouns. Huang et al. (2001) found that people felt comfortable with a recorded, but not with a synthetic (i.e., non-human, artificial) voice engaging in self-reference using the pronoun “I”. Moreover, their trust in a synthetic voice system decreased when it referred to itself by saying “I” (Huang et al. 2001). Thus, the type of voice (recorded vs. synthetic) interacted with pronoun use in that the effects of the system’s use of first-person pronouns were dependent upon the implemented type of voice. As Nass and Brave (2005, p. 119) put it, when the system used a synthetic voice, its use of pronouns was considered an “attempt to claim humanity” that caused suspicion, leading to negative evaluations of the system.
Huang et al. (2001) studied pronoun use to establish (im)personal formulations rather than self-description, but a similar interaction effect may occur in the present study: When the teleoperation procedure is transparent, children may feel that the robot’s self-description is out of place because the robot is, in fact, not an independent entity (i.e., not a ‘self’). This discrepancy may further increase children’s awareness of the robot’s inanimate, machinelike status. In terms of child-robot relationship formation, when children know the robot is being remotely controlled, they may understand that the robot’s engagement in self-description is unspontaneous and, therefore, less meaningful. In contrast, when children are not aware of the teleoperation procedure, it may appear to them as if the robot chooses to tell them something about itself. This impression may give children the feeling that the robot is actually invested in the process of getting to know each other, which may be beneficial to their experience of the robot as a potential friend. Therefore, our third hypothesis predicted that the effect of a robot’s engagement in self-description on children’s perception of, and relationship formation with, the robot is moderated by transparency about the teleoperation procedure:
Hypothesis 3a (H3a) When the teleoperation procedure is transparent, as opposed to when it is not transparent, the robot’s engagement in self-description will decrease children’s ratings of the robot’s animacy, autonomy, anthropomorphism, social presence, and perceived similarity.
Hypothesis 3b (H3b) When the teleoperation procedure is not transparent, the positive effect of the robot’s engagement in self-description on closeness and trust will be stronger than when the teleoperation procedure is transparent.
3 Methods
We conducted a two-factorial experiment with teleoperation (overt/covert) and self-description (operationalized as self-reference through personal pronouns) as between-subject factors. Before we started the data collection, we obtained ethical approval for carrying out this study from the Ethics Review Board of the Faculty of Social and Behavioral Sciences of the University of Amsterdam.
3.1 Participants
We collected data at four primary schools across the Netherlands. We asked for active written consent from the schools as well as from children’s parents. On the parental consent form, parents were asked to report whether their child had any medical condition. In an accompanying letter that informed parents about the study, we explained that although all children would be welcome to participate, data from children with medical conditions that could interfere with the study’s scientific goals would be excluded from analyses.
We were able to collect data from 172 children in the age range of 7–10 years. The data of four children were excluded from analyses because they had participated in an earlier data collection of ours (one child); did not properly understand the questionnaire procedure (one child); or had been diagnosed with Autism Spectrum Disorder (ASD; two children). We excluded the data of children with ASD because, first, these children tend to experience difficulties with respect to social interactions (American Psychological Association 2013) and relationships (e.g., Eisenmajer et al. 1996), and, second, ASD seems to be associated with atypical anthropomorphic reasoning (Epley et al. 2007). Therefore, we analyzed the data of 168 children (74 male, 94 female, Mage = 9.02, SDage = 0.71), who had been randomly assigned to the four experimental groups. We found no significant differences in age, F(3, 164) = 0.930, p = .428, or biological sex, χ2 (3, N = 168) = 0.193, p = .979, across the groups, which indicates that the randomization procedure was successful. Occasionally, children indicated that they did not know how to answer particular items of the questionnaire, resulting in missing values. These children were excluded from the analysis of the respective measure.
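For readers unfamiliar with such randomization checks, the two tests reported above (a one-way ANOVA on age and a chi-square test on biological sex across the four conditions) can be sketched as follows. This is an illustrative sketch using simulated data; the group sizes and sex counts below are invented for demonstration and are not the study’s actual dataset or analysis code.

```python
# Illustrative randomization check for a 2 x 2 between-subject design.
# NOTE: all data below are simulated for demonstration purposes only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate ages for four experimental groups of 42 children each (4 x 42 = 168).
ages_per_group = [rng.normal(loc=9.0, scale=0.7, size=42) for _ in range(4)]

# One-way ANOVA: does mean age differ significantly across the four groups?
f_stat, p_age = stats.f_oneway(*ages_per_group)

# Chi-square test of independence: is biological sex distributed evenly
# across the four groups? Rows = groups, columns = male/female counts.
sex_counts = np.array([[18, 24],
                       [19, 23],
                       [18, 24],
                       [19, 23]])
chi2, p_sex, dof, _expected = stats.chi2_contingency(sex_counts)

print(f"Age check: F = {f_stat:.3f}, p = {p_age:.3f}")
print(f"Sex check: chi2 = {chi2:.3f}, df = {dof}, p = {p_sex:.3f}")
```

Non-significant p-values on both tests (as reported above for the actual sample) indicate that the groups do not differ systematically on these background variables, supporting the success of the random assignment.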
3.2 Interaction task and manipulation
Each child engaged in one short interaction with the Nao robot (Softbank), during which they asked the robot eight pre-determined questions (e.g., “Are you a boy or a girl?”, “Do you ever get tired?”) from a question sheet. In case children had difficulty reading the questions, the experimenter helped them out. During previous data collections (e.g., van Straten et al. 2020c) children had often tried to ask the robot questions, which we used as inspiration while designing the current interaction scenario.
We made sure that the robot did not engage in any behaviors that could influence children’s perceptions of the robot as alive or humanlike beyond our experimental manipulations. The robot, therefore, did not conform to any social conventions (e.g., greetings, listener responses) and stood completely still without blinking throughout the interaction. In the overt teleoperation condition, the experimenter told the child, prior to the interaction, that she would control the robot from a laptop. She explained that Nao could not talk with the child on its own, that she had to press a button upon every question to make the robot respond, and that the child could only pose the first question after she had started up a computer program containing the answers. To children assigned to the covert teleoperation condition, the experimenter said that as the questions were provided on the question sheet, they would not need her help during the interaction, such that she would go and do something else for a while. The explanations that were provided across the groups were matched in length.
In the self-description condition, the robot referred to itself by using the personal pronoun “I” when answering the child’s questions (e.g., “I’m not a boy, but also not a girl: I’m just a robot”), while in the condition in which the robot did not self-describe, it only referred to robots in general (e.g., “Robots are not boys, but also not girls: Robots are just robots”). Apart from the difference in pronoun use and some minor, unavoidable adjustments, the robot’s answers were identical across conditions.
3.3 Procedure
Before taking the first child to the experimental room, the experimenter explained the study procedure to the children at class level. She showed the children a picture of the robot and explained, in age-appropriate language, that participation could be stopped any moment without justification. Furthermore, children were assured that their data would be stored and processed such that others could not find out who had given which answers. They were given the opportunity to ask additional questions about any aspect of the study procedure. Answers to questions that could influence the findings were postponed until the debriefing.
Once everything was fully clear to the children, children came to the experimental room one by one, where the experimenter awaited them. The robot was activated before children’s arrival. The experimenter asked the child to sit on the floor in front of the robot, indicating that the child could freely determine how close to the robot s/he would like to sit. The experimenter sat down next to the child and asked explicitly whether the child would still like to participate, reminding him/her that the interaction could always be stopped at any point in time.
Upon an affirmative answer from the child, the experimenter handed him/her the question sheet and explained that s/he could ask the questions, one by one and in the right order, to the robot. She told the child that the robot’s name was Nao, and that once Nao would have answered all questions, she would have some questions for the child (i.e., the questionnaire). In the overt teleoperation condition, she then explained the WOZ procedure. In the covert teleoperation condition, she told the child that she would go and do something else. The child was asked to save any questions that were not on the question sheet for later. Once the child understood the procedure, the experimenter took a seat behind the laptop (see Fig. 1 for a picture of the experimental setting).
When the robot had answered the last question, the experimenter put it in a stand-by mode (i.e., seated position) and asked the child to join her at a table. After the child had filled in some demographic information, the experimenter explained the questionnaire procedure, introducing children to the answer scale and familiarizing them with the question format through several practice items (e.g., “I like French fries”; the familiarization phase was inspired by Leite et al. (2017)). Once the child had indicated that s/he was ready to start the questionnaire, the experimenter presented them with a series of questions tapping into their perception of, and relationship formation with, the robot. The questionnaire ended with a treatment check, which consisted of two semantic differentials. This answer format was explained to the children before they answered the semantic differentials.
When the experimenter asked the child to return to his/her classroom and call the next child, she asked him/her not to discuss the content of the interaction and/or questionnaire with other children until the debriefing. When all children had finished their participation, they were debriefed at class level (see Schadenberg et al. 2017, for a similar approach). The experimenter informed children about the robot’s mechanical nature and workings and explained the pre-programmed nature of the interaction using a screenshot of a Choregraphe program as an example. She pointed out some differences between robots and humans (i.e., current robots’ lack of truly human capacities). To children who had been exposed to the covert teleoperation condition, she revealed that she had controlled the robot from a distance. Judging from these children’s surprise, those in the overt teleoperation condition had kept this information a secret. The experimenter explained why she had told some children, but not others, about the WOZ procedure in advance. She also indicated that, while the robot had said almost exactly the same things to each child, there was one more difference: To some, the robot had referred to itself saying “I”, while to others, it had exclusively talked about “robots” in general. The purpose of this manipulation, too, was explained. To finish the debriefing, children were allowed to pose any remaining questions.
3.4 Measures
The questionnaire consisted of closed-ended questions and used a five-point Likert response scale (see Appendix A). The answer options ran from (1) “does not apply at all” to (5) “applies completely”, and their meaning was illustrated by bars of increasing height that did, however, not contain any indication as to the desirability of the answer options (e.g., colors, smileys; see Severson and Lemm 2016 for the original visual response scale). The suitability of the answer scale for children in this age range was confirmed in earlier data collections (de Jong et al. 2020; van Straten et al. 2020a, c).
The questionnaire first tapped into children’s perceptions of the robot’s animacy, anthropomorphism, social presence, and similarity to themselves. Subsequently, children’s feelings of closeness toward and trust in the robot were assessed, followed by a measure of perceived autonomy and, finally, the treatment check (see Appendix B for the items used to measure each concept). The measures were ordered such that earlier ones would minimally influence later ones. In contrast to the other perception measures, the measure of perceived robot autonomy was placed toward the end of the questionnaire: it was a new measure, and we wished to prevent any confusion it might cause from affecting children’s responses to the other measures. The one-factorial structure of the measures of animacy, anthropomorphism, social presence, perceived similarity, closeness, and trust was confirmed in earlier studies (van Straten et al. 2020a, c).
3.4.1 Animacy
We assessed animacy through a four-item scale inspired by two measures of the concept that were used among adults (Bartneck et al. 2009; Ho and MacDorman 2010). We performed a factor analysis (principal axis factoring, direct oblimin rotation; the same procedure was used for all scales) that confirmed the one-factorial structure of the scale, which explained 33% of the variance. One item (i.e., “Nao can die”) had a factor loading of only .134, which resulted in low internal consistency of the scale (α = .58). Removing the item increased the internal consistency to α = .69. We thus performed our analyses using a three-item version of the measure rather than the measure as it was originally administered. An index score of animacy was computed by averaging the remaining items (M = 2.95, SD = 0.91, skewness = − 0.386, kurtosis = − 0.131).
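The scale-construction steps used throughout this section (checking internal consistency, dropping a weakly loading item, and averaging the retained items into an index score) can be sketched as follows. The study’s analyses were run in SPSS; this Python fragment uses simulated Likert responses, so the sample, seed, and resulting values are illustrative assumptions rather than the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulated five-point Likert responses (1-5) to a four-item scale,
# driven by a shared latent trait so the items correlate.
latent = rng.normal(3, 1, size=168)
items = np.clip(np.round(latent[:, None] + rng.normal(0, 0.8, (168, 4))), 1, 5)

alpha_full = cronbach_alpha(items)            # alpha of the full four-item scale
alpha_reduced = cronbach_alpha(items[:, 1:])  # alpha after dropping the first item

# The index score is simply the mean of the retained items, per child.
index_score = items[:, 1:].mean(axis=1)
```

In practice, the decision to drop an item would rest on the factor loadings and the alpha-if-item-deleted statistics, as described above; the function here only reproduces the alpha computation itself.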
3.4.2 Autonomy
Based on two measures of (robot) autonomy used among adults (Rijsdijk and Hultink 2003; Rosenthal-von der Pütten et al. 2017), we developed a five-item measure to assess this concept in a CRI context. The items tapped both into autonomy itself and into the moral accountability resulting from this notion (see Note 2). The five items loaded onto one factor that explained 37% of the variance, and the scale had good internal consistency (α = .72). An index score of autonomy was computed by averaging the items (M = 2.81, SD = 0.90, skewness = 0.013, kurtosis = − 0.499).
3.4.3 Anthropomorphism
Anthropomorphism was measured using a four-item scale that was based on the technology dimension of the Individual Differences in Anthropomorphism Questionnaire-Child Form (IDAQ-CF) as presented by Severson and Lemm (2016). The one-factorial structure of the scale was confirmed for the present sample and explained 26% of the variance. There was one item with a low factor loading (i.e., “Nao knows that Nao is a robot”, factor loading .186). As a consequence, the internal consistency of the scale was low (α = .54). Because removing the item did not substantially increase internal consistency, we maintained the original scale, for which an index score was computed by averaging the items (M = 3.46, SD = 0.73, skewness = − 0.283, kurtosis = − 0.421).
3.4.4 Social presence
To assess social presence, we used a four-item scale inspired by an adult measure presented by Heerink and colleagues (2010). The factor analysis confirmed that the items loaded onto one factor that explained 60% of the variance. The scale was internally consistent (α = .86). We averaged the items to compute an index score of social presence (M = 3.87, SD = 0.88, skewness = − 0.745, kurtosis = 0.333).
3.4.5 Perceived similarity
Perceived similarity was assessed through a four-item scale adapted from the attitude dimension of McCroskey et al.’s (1975) perceived homophily measure. The items loaded onto one factor explaining 41% of the variance, and the scale had good internal consistency (α = .72). We computed an index score of perceived similarity by averaging the items (M = 2.40, SD = 0.74, skewness = 0.187, kurtosis = − 0.149).
3.4.6 Closeness
We measured closeness using a five-item scale that we developed for use in CRI settings and validated among children aged 8–9 years old (van Straten et al. 2020a). The one-factorial structure of the scale explained 52% of the variance in the present sample and internal consistency was good (α = .84). An index score of closeness was computed by averaging the items (M = 3.88, SD = 0.72, skewness = − 0.484, kurtosis = 0.568).
3.4.7 Trust
Trust was assessed through a four-item scale based on a measure by Larzelere and Huston (1980). The factor analysis confirmed the one-factorial structure of the scale that explained 46% of the variance. The scale was internally consistent (α = .74). We computed an index score of trust by averaging the items (M = 4.28, SD = 0.61, skewness = − 0.694, kurtosis = − 0.172).
3.4.8 Treatment check
The treatment check consisted of two seven-point semantic differentials, the first tapping into the robot’s self-description and the second addressing its teleoperation. The first item asked children to indicate whether the robot had talked about itself (left-hand extreme; this answer option corresponded to a score of 1) or about other robots (right-hand extreme, corresponding to a score of 7; M = 3.05, SD = 1.97, skewness = 0.576, kurtosis = − 0.832). The second item asked whether, when she took her place behind the laptop, the experimenter had said that she would go and control the robot (score 1) or do something else (score 7; M = 3.70, SD = 2.77, skewness = 0.208, kurtosis = − 1.845).
3.5 Analytical approach
The data were analyzed using SPSS Statistics (version 25) and were considered to be normally distributed when skewness and kurtosis ranged between − 2 and 2 (George and Mallery 2010). This was confirmed for all dependent variables. We conducted a series of ANOVAs to test the treatment check and hypotheses. The assumption of homoscedasticity was violated only for the treatment check. For the latter, we therefore also consulted the parameter estimates with robust standard errors (using the heteroscedasticity-consistent standard error HC3; Hausman and Palmer 2012). As both significance tests provided the same results, we report only the results of the ANOVAs. We initially controlled for potential influences of school and of errors in the teleoperation procedure (e.g., ill-timed robot responses, premature activation of the stand-by mode) in the analyses. As the results of the ANOVAs mirrored the outcomes of the analyses with the two control variables, we report only the results of the model without the control variables.
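A minimal sketch of this analytical pipeline: the skewness/kurtosis normality heuristic, an ANOVA for one main effect, and partial eta squared computed from F and its degrees of freedom. The original analyses used SPSS and the full 2 × 2 design; this Python/SciPy fragment simulates only two groups, borrowing the condition means and SDs reported in Section 4.2, so the group sizes and seed are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated perceived-autonomy scores; means/SDs borrowed from Section 4.2,
# group sizes (84 per condition) assumed for illustration.
overt = rng.normal(2.54, 0.89, 84)
covert = rng.normal(3.08, 0.83, 84)
scores = np.concatenate([overt, covert])

# Normality heuristic from George and Mallery (2010):
# skewness and (excess) kurtosis should fall between -2 and 2.
skew, kurt = stats.skew(scores), stats.kurtosis(scores)
normal_enough = (-2 < skew < 2) and (-2 < kurt < 2)

# One-way ANOVA for the teleoperation main effect.
f, p = stats.f_oneway(overt, covert)

# Partial eta squared from F and its degrees of freedom:
# part. eta^2 = (F * df1) / (F * df1 + df2)
df1, df2 = 1, len(scores) - 2
partial_eta_sq = (f * df1) / (f * df1 + df2)
```

With two conditions per factor, the F test for a main effect has df1 = 1, which is why the reported statistics all take the form F(1, 16x); the HC3-robust re-estimation mentioned above would require a regression framework (e.g., OLS with robust covariance) and is not reproduced here.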
4 Results
4.1 Treatment check
Children who were exposed to the self-description condition indicated more often that the robot had talked about itself (M = 1.81, SD = 1.15) than children in the condition in which the robot talked about robots in general (M = 4.34, SD = 1.81), F(1, 164) = 116.267, p < .001, part. η2 = .42. Children in the overt teleoperation condition indicated more often that the experimenter had said that she would go and control the robot (M = 1.53, SD = 1.28) than children in the covert teleoperation condition (M = 6.03, SD = 1.91), F(1, 162) = 320.357, p < .001, part. η2 = .66. No interaction effects between teleoperation and self-description were found. Thus, the treatment check was successful.
4.2 Tests of hypotheses
Table 1 provides an overview of the means and standard deviations for each of the two factors. In addition, the means and standard deviations for each of the four experimental groups can be consulted in Appendix C. According to H1, transparency about teleoperation would affect children’s perception of (H1a) and relationship formation with (H1b) the robot such that in the overt teleoperation condition, children would rate the robot lower in animacy, autonomy, anthropomorphism, social presence, and perceived similarity, and report less closeness and trust. As to children’s robot perceptions, children in the overt teleoperation condition perceived the robot to be less autonomous (M = 2.54, SD = 0.89) than did children in the covert teleoperation condition (M = 3.08, SD = 0.83), F(1, 164) = 17.416, p < .001, part. η2 = .10. In addition, overt teleoperation led children to rate the robot lower in anthropomorphism (M = 3.25, SD = 0.70) than covert teleoperation (M = 3.68, SD = 0.69), F(1, 164) = 15.682, p < .001, part. η2 = .09.
However, transparency about the teleoperation procedure had no effect on children’s perceptions of the robot’s animacy, F(1, 163) = 0.158, p = .692, part. η2 = .00, social presence, F(1, 164) = 1.130, p = .289, part. η2 = .01, and perceived similarity, F(1, 164) = 0.682, p = .410, part. η2 = .00. As to child-robot relationship formation, we found no differences in children’s feelings of closeness, F(1, 164) = 0.218, p = .641, part. η2 = .00, or trust, F(1, 164) = 2.318, p = .130, part. η2 = .01, across teleoperation conditions. Thus, H1a was partly supported, while H1b was not supported.
According to H2, self-description through the use of personal pronouns would increase children’s perceptions of the robot’s animacy, autonomy, anthropomorphism, social presence, and similarity to themselves (H2a), and strengthen their feelings of closeness toward and trust in the robot (H2b). In contrast to our expectation, children in the self-description condition perceived the robot to be less similar to themselves (M = 2.28, SD = 0.73) than children in the condition without self-description (M = 2.52, SD = 0.72), F(1, 164) = 4.609, p = .033, part. η2 = .03. Self-description had no effect on perceived animacy, F(1, 163) = 2.741, p = .100, part. η2 = .02, autonomy, F(1, 164) = 3.282, p = .072, part. η2 = .02, anthropomorphism, F(1, 164) = 0.258, p = .612, part. η2 = .00, and social presence, F(1, 164) = 0.142, p = .707, part. η2 = .00, and failed to affect child-robot relationship formation in terms of closeness, F(1, 164) = 1.534, p = .217, part. η2 = .01, and trust, F(1, 164) = 2.134, p = .146, part. η2 = .01. Thus, neither H2a nor H2b were supported.
Finally, H3 predicted that self-description would decrease, instead of increase, children’s ratings of the robot’s animacy, autonomy, anthropomorphism, social presence, and perceived similarity in the overt teleoperation condition (H3a), and increase children’s closeness to and trust in the robot more strongly in the covert than in the overt teleoperation condition (H3b). Neither H3a nor H3b were supported: We found no interaction effects on animacy, F(1, 163) = 3.384, p = .068, part. η2 = .02, autonomy, F(1, 164) = 0.030, p = .862, part. η2 = .00, anthropomorphism, F(1, 164) = 0.097, p = .756, part. η2 = .00, social presence, F(1, 164) = 2.982, p = .086, part. η2 = .02, perceived similarity, F(1, 164) = 0.428, p = .514, part. η2 = .00, closeness, F(1, 164) = 0.010, p = .921, part. η2 = .00, or trust, F(1, 164) = 0.264, p = .608, part. η2 = .00.
5 Discussion
Children’s tendency to treat robots as social, mental, and moral others (Kahn et al. 2012) may partly result from the way in which social robots are presented to them: as autonomous entities that tell children about themselves during CRI. Against this background, we experimentally investigated whether and how transparency about the teleoperation procedure and a robot’s engagement in self-description affect children’s perception of a social robot and their sense of relationship formation with it.
5.1 Effects of transparent teleoperation
Children’s lower ratings of the robot’s anthropomorphism and autonomy in the overt teleoperation condition suggest that transparency about the teleoperation procedure decreased children’s perceptions of the robot as a mental and moral other. At the same time, children’s views of the robot as an animate entity that is similar to themselves remained unaffected. A potential explanation of the absence of transparency effects on animacy and perceived similarity to the self is that, in all conditions, the content of the robot’s answers clearly communicated its mechanical nature. This may have influenced children’s ratings of the robot’s animacy and perceived similarity, which fell closely around (for animacy) or somewhat below (for perceived similarity) the center point of the answer scale across the factors (see Table 1). In other words, children tended to slightly disagree with the robot’s similarity to themselves and were generally undecided about its animacy.
Children’s experience of the robot as a socially present entity was independent of their awareness of the teleoperation procedure, which may be explained by our operationalization of the concept. The items that we used to assess social presence asked children about their experience of the robot as a humanlike presence (e.g., “When I was talking to Nao, it felt as though I was with a person”). Children in the covert teleoperation condition may have experienced the robot as socially present because of its seemingly autonomous working. Children in the overt teleoperation condition, in contrast, may have experienced the robot’s presence as humanlike because of, rather than despite, their knowledge of the teleoperator’s involvement in the interaction. Although their perception of the robot in terms of humanlike capacities decreased (i.e., anthropomorphism), children may thus have interpreted the ‘human behind the machine’ as a reason to ascribe humanlike presence to the robot.
Children’s relationship formation with the robot in terms of closeness and trust was unaffected by transparency about the teleoperation procedure. As noted by Serpell (2003) in the context of human-animal bonding, the inability of non-human others to lie, criticize, and betray may foster a sense of support and intimacy. Likewise, judging from children’s comments during the experiment, the robot’s lack of autonomy may have given children reasons to trust the robot: The inability to act on its own disables the robot, in the children’s view, from behaving unreliably (e.g., the robot is unable to pass on secrets), and the preprogrammed nature of its responses may have prevented children from questioning the robot’s honesty. As children’s comments only provide initial, anecdotal evidence for this line of reasoning, future research should further investigate this possibility.
Turkle (2007) has argued that children bond with relational artifacts “not because of what these objects [can] do (physically or cognitively) but because of the children’s emotional connection to the objects and their fantasies about how the objects might be feeling about them” (Turkle 2007, p. 507). In contrast to this statement, a recent study (van Straten et al. 2020c) found that children trusted a robot less when they were made aware of its lack of human psychological capacities (i.e., intelligence, self-consciousness, emotionality, identity construction, and social cognition). Still, in the present study, children tended to be aware of the robot’s lack of autonomy and human-likeness but did not seem to care about it. It thus seems too early to conclude that one particular robot feature is responsible for child-robot relationship formation. Possibly, children’s persistent view of social robots as potential friends is determined in part by their experience and in part by the capacities of a robot.
5.2 Effects of self-description
The robot’s self-description (operationalized as self-reference through personal pronouns) did not affect children’s perceptions of the robot in terms of animacy, autonomy, anthropomorphism, or social presence. Similar to the absent effect of transparency on animacy, the absence of an effect of self-description on animacy may result from the content of the interaction. More generally, the robot’s avoidance of self-reference may have appeared less meaningful to children than expected because of the emphasis of the robot’s answers on its own technological nature. Against this background, children may not have been surprised when the robot did not refer to itself by using “I”.
The content of the robot’s answers may also explain why children perceived the robot to be less similar to themselves, and thus as less of a ‘mental other’, when it engaged in self-description. Across the factors, children perceived the robot to be rather dissimilar to themselves (see mean similarity scores in Table 1), which may be a consequence of the robot explaining to the children that it does not possess characteristics such as biological sex or age. The robot’s use of the pronoun “I” may have emphasized even more strongly to children how this robot in particular, rather than robots in general, fundamentally differs from them with respect to such characteristics, resulting in an adverse effect of self-reference on perceived similarity.
Children’s feelings of closeness toward, and trust in, the robot also remained unaffected by the robot’s self-description. Next to children’s general persistence in considering social robots as potential friends (see above), the absent effects of self-description on child-robot relationship formation may indicate that self-disclosure may be more effective than self-description when the aim is to promote the emergence of a social relationship between a child and robot. Even though the importance of intimacy to friendships is still developing during primary school years (e.g., Furman and Bierman 1984; Laursen and Hartup 2002), some more ‘private’ information may need to be shared to further increase children’s feelings of closeness toward, and trust in, a robot.
Alternatively, and in light of the robot’s openness about its mechanical nature, children may not have considered the robot an individual, but rather an interchangeable mechanical entity. The robot’s provision of general information about robots may have reduced their uncertainty about this robot to the same extent as when it provided information specifically about itself. By extension, children may thus have seen friendship potential in social robots generally, rather than in this particular robot—which would make any child-robot relationship that emerged rather impersonal (see also Fox and Gambino 2021, on HRI). This explanation is supported by the outcomes of the treatment check. The mean score of children to whom the robot did not self-describe fell near the center point of the semantic differential asking them to indicate whether the robot had talked about itself or about other robots. Apparently, the children in the condition in which the robot only talked about robots in general still thought that the robot had, indirectly, provided them with information about itself.
5.3 Limitations
Our study has four limitations. First, our operationalization of the robot’s self-description was rather unobtrusive (i.e., only the use of pronouns differed between the conditions). Being aware of its subtlety, we opted for this operationalization because it constitutes the only way to manipulate the robot’s provision of self-related information without altering the content of the interaction. Second and relatedly, the effects of transparency about the teleoperation procedure might have been stronger if the experimenter, in the overt teleoperation condition, had controlled the robot within children’s direct line of sight (i.e., sitting down next to the child). Instead, she sat down behind a table which, depending on the room in which the experiment was conducted, was more or less visible to children when facing the robot. Third, the robot’s openness about its technological status during the interaction may have obscured expected findings. However, our goal was to investigate children’s responses to robots as they are currently entering our society without actively portraying them as social, mental, and moral others. The robot’s answers thus had to provide realistic information.
Fourth and finally, our expectations about the effects of transparent teleoperation and self-description on children’s perception of, and relationship formation with, the robot may have been biased by what can be expected in an interpersonal context, when adopting an adult perspective: If a human refuses to refer to him/herself saying “I” and only speaks of “people” in general, this is considered odd. But if a robot describes characteristics of “robots” rather than of itself, this may match its machinelike status and thus be acceptable. In addition, it may seem evident to adults that when a robot is being remotely controlled, it lacks social presence and does, by extension, not qualify as a potential friend. Yet the absent effects on social presence and trust, in particular, demonstrate that children’s reasoning about the teleoperator’s involvement in the interaction may follow different patterns. While our questionnaire only included closed-ended questions, the inclusion of open-ended ones may aid future studies to further elucidate children’s reasoning about robots.
6 Conclusions
Our findings tentatively suggest that research on children’s interactions with robots in general, and on relationship formation between children and robots in particular, may benefit from some reorientation. First, CRI research should also investigate children’s responses to robots in realistic interaction settings that leave the depiction of robots as social, mental, and moral others up to children’s imagination instead of, actively or passively, portraying robots in ways that do not match their current status and capacities. Insights from such studies may help to inform societal debates about the benefits and drawbacks of social robots in children’s lives. Second, instead of focusing our attention on similarities between interpersonal communication and CRI, we should also ask ourselves how the mechanisms of children’s responses to social robots may deviate from interpersonal processes (see also Fox and Gambino 2021, in the broader context of HRI). When children are (made) aware of the differences between robots and humans, interpersonal principles may less seamlessly apply than they seem to do now.
Our findings also indicate that children may consider robots as potential friends regardless of their knowledge of the robot’s teleoperated working and its engagement in self-description. A societally important implication of this finding is that it may be possible to reach potential benefits of child-robot relationship formation (e.g., in education and healthcare applications; Kory Westlund and Breazeal 2019a; Sinoo et al. 2018) without ‘deceiving’ children into thinking robots are more capable and social than they currently are. Future research should investigate whether our findings with respect to the emergence of children’s initial sense of relationship with robots extend to situations in which children interact with robots on a long-term basis. If so, robotic companions could be used in, for example, healthcare and educational settings while minimizing possible negative consequences of child-robot relationship formation (e.g., disappointment about robots’ actual ‘friendship potential’ upon discovering their teleoperated nature; see Kory Westlund and Breazeal 2015).
In short, we need to keep reminding ourselves that “robots are not people” (Dautenhahn 2007, p. 104)—and shape the research agenda accordingly—to critically explore the full range of possible societal implications of social robots for children. Further elucidating the boundary conditions of child-robot relationship formation would advance our understanding of the characteristics of robots that are necessary or sufficient to support children—whether as a complement to or a temporary replacement of interpersonal interaction (e.g., during radiation therapy; Ligthart et al. 2019b). While future robots may be (more) autonomous and may have a wider range of (increasingly humanlike) characteristics and capacities than current robots, the distinction between humans and robots will remain relevant (Fox and Gambino 2021), and, if not overlooked, may be integrated in CRI scenarios to allow for rewarding interactions between robots and children.
Data availability
Data are available from the first author upon request.
Code availability
Not applicable.
Notes
1. The studies are on self-disclosure. However, the intimacy of the information shared by the robots was low to moderate (e.g., small talk about pets, information about the robot’s technological working; see Archer and Berg 1978), while self-disclosure refers to the sharing of more intimate, private information (Culbert 1967, as cited in Gilbert 1976). Thus, it may be argued that these studies investigated self-description rather than self-disclosure.
2. We assessed children’s perceptions of the robot as a moral other based on their judgements of its degree of autonomy and, by extension, accountability. Earlier studies have used narrower operationalizations (e.g., the acceptability of physical or verbal abuse, Chernyak and Gary 2016; the right to be treated according to basic human rights, Kahn et al. 2012). This broader, more indirect approach suited the focus of our study better and limited the likelihood of ceiling effects as encountered by Chernyak and Gary (2016), which indicated that children generally thought the robot should not be mistreated.
References
American Psychological Association (2013) Diagnostic and statistical manual of mental disorders, 5th edn. American Psychological Association, Washington
Ames DR (2004) Inside the mind reader’s toolkit: projection and stereotyping in mental state inference. J Pers Soc Psychol 87(3):340–353. https://doi.org/10.1037/0022-3514.87.3.340
Archer RL, Berg JH (1978) Disclosure reciprocity and its limits: a reactance analysis. J Exp Soc Psychol 14(6):527–540. https://doi.org/10.1016/0022-1031(78)90047-1
Barber KS, Goel A, Martin CE (2000) Dynamic adaptive autonomy in multi-agent systems. J Exp Theor Artif Intell 12(2):129–147. https://doi.org/10.1080/095281300409793
Baroni I, Nalin M, Baxter P, Pozzi C, Oleari E, Sanna A, Belpaeme T (2014) What a robotic companion could do for a diabetic child. Proceedings of the 23rd international symposium on robot and human interactive communication, pp 936–941. https://doi.org/10.1109/ROMAN.2014.6926373
Bartneck C, Kulić D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81. https://doi.org/10.1007/s12369-008-0001-3
Bauminger N, Finzi-Dottan R, Chason S, Har-Even D (2008) Intimacy in adolescent friendship: the roles of attachment, coherence, and self-disclosure. J Soc Pers Relat 25(3):409–428. https://doi.org/10.1177/0265407508090866
Bauminger-Zviely N, Agam-Ben-Artzi G (2014) Young friendship in HFASD and typical development: friend versus non-friend comparisons. J Autism Dev Disord 44(7):1733–1748. https://doi.org/10.1007/s10803-014-2052-7
Berger CR, Calabrese RJ (1975) Some explorations in initial interaction and beyond: toward a developmental theory of interpersonal communication. Hum Commun Res 1(2):99–112. https://doi.org/10.1111/j.1468-2958.1975.tb00258.x
Berndt TJ (2004) Children’s friendships: shifts over a half-century in perspectives on their development and their effects. Merrill Palmer Q 50(3):206–223. https://doi.org/10.1353/mpq.2004.0014
Berscheid E, Regan P (2005) The psychology of interpersonal relationships. Pearson Education, New Jersey
Breazeal CL, Dautenhahn K, Kanda T (2016) Social robotics. In: Siciliano B, Khatib O (eds) Springer handbook of robotics. Springer, Heidelberg, pp 1935–1971 https://doi.org/10.1007/978-3-319-32552-1_72
Brennan SE, Ohaeri JO (1994) Effects of message style on users’ attributions toward agents. Proceedings of the conference on human factors in computing systems, pp 281–282. https://doi.org/10.1145/259963.260492
Bumby K, Dautenhahn K (1999) Investigating children’s attitudes towards robots: a case study. Proceedings of the third international cognitive technology conference. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.40.2906&rep=rep1&type=pdf
Calvo-Barajas N, Perugia G, Castellano G (2020) The effects of robot’s facial expressions on children’s first impressions of trustworthiness. Proceedings of the 29th international conference on robot and human interactive communication, pp 165–171. https://doi.org/10.1109/RO-MAN47096.2020.9223456
Cameron D, Fernando S, Collins EC, Millings A, Szollosy M, Moore R, Prescott T et al. (2017) You made him be alive: children’s perceptions of animacy in a humanoid robot. Proceedings of the conference on biomimetic and biohybrid systems, pp 73–85. https://doi.org/10.1007/978-3-319-63537-8_7
Chernyak N, Gary HE (2016) Children’s cognitive and behavioral reactions to an autonomous versus controlled social robot dog. Early Educ Dev 27(8):1175–1189. https://doi.org/10.1080/10409289.2016.1158611
Cole M, Cole S, Lightfoot C (2005) The development of children, 5th edn. Worth, New York
Culbert S (1967) The interpersonal process of self-disclosure: it takes two to know one. In: Hart JT, Tomlinson T (eds) New directions in client-centered therapy. Houghton Mifflin, Boston
Curtis JM (1981) Effect of therapist’s self-disclosure on patients’ impressions of empathy, competence, and trust in an analogue of a psychotherapeutic interaction. Psychol Rep 48(1):127–136. https://doi.org/10.2466/pr0.1981.48.1.127
Dautenhahn K (2007) Methodology & themes of human-robot interaction: a growing research field. Int J Adv Robot Syst 4(1):103–108. https://doi.org/10.5772/5702
de Haas M, Aroyo AM, Barakova E, Haselager W, Smeekens I (2016) The effect of a semi-autonomous robot on children. Proceedings of the eighth international conference on intelligent systems, pp 376–381. https://doi.org/10.1109/IS.2016.7737448
de Jong C, Kühne R, Peter J, van Straten CL, Barco A (2020) Intentional acceptance of social robots: development and validation of a self-report measure for children. Int J Hum-Comput Stud 139:102426. https://doi.org/10.1016/j.ijhcs.2020.102426
Edwards A, Edwards C, Westerman D, Spence PR (2019) Initial expectations, interactions, and beyond with social robots. Comput Hum Behav 90:308–314. https://doi.org/10.1016/j.chb.2018.08.042
Eisenmajer R, Prior M, Leekam S, Wing L, Gould J, Welham M, Ong B (1996) Comparison of clinical symptoms in autism and Asperger’s disorder. J Am Acad Child Adolesc Psychiatry 35(11):1523–1531. https://doi.org/10.1097/00004583-199611000-00022
Emmeche C (2014) Robot friendship: can a robot be a friend? Int J Signs Semiot Syst 3(2):26–42. https://doi.org/10.4018/ijsss.2014070103
Epley N, Waytz A, Cacioppo JT (2007) On seeing human: a three-factor theory of anthropomorphism. Psychol Rev 114(4):864–886. https://doi.org/10.1037/0033-295X.114.4.864
Epley N, Waytz A (2010) Mind perception. In: Alicke M, Apperly I, Fiske S, Gilbert D, Malle B, Mitchell J, Wegner D et al (eds) Handbook of social psychology. Wiley & Sons, New York, pp 498–541
Eyssel F, Wullenkord R, Nitsch V (2017) The role of self-disclosure in human-robot interaction. Proceedings of the sixth international symposium on robot and human interactive communication, pp 922–927. https://doi.org/10.1109/ROMAN.2017.8172413
Fox J, Gambino A (2021) Relationship development with humanoid social robots: applying interpersonal theories to human/robot interaction. Cyberpsychol Behav Soc Netw. https://doi.org/10.1089/cyber.2020.0181
Furman W, Bierman KL (1984) Children’s conceptions of friendship: a multimethod study of developmental changes. Dev Psychol 20(5):925–931. https://doi.org/10.1037/0012-1649.20.5.925
George D, Mallery M (2010) SPSS for Windows step by step: a simple guide and reference 17.0 update, 10th edn. Pearson, Boston
Gilbert SJ (1976) Empirical and theoretical extensions of self-disclosure. In: Miller GR (ed) Explorations in interpersonal communication. Sage Publications, Beverly Hills, pp 197–215
Gray HM, Gray K, Wegner DM (2007) Dimensions of mind perception. Science 315(5812):619. https://doi.org/10.1126/science.1134475
Hausman J, Palmer C (2012) Heteroskedasticity-robust inference in finite samples. Econ Lett 116(2):232–235. https://doi.org/10.1016/j.econlet.2012.02.007
Heerink M, Kröse B, Evers V, Wielinga B (2010) Assessing acceptance of assistive social agent technology by older adults: the Almere model. Int J Soc Robot 2(4):361–375. https://doi.org/10.1007/s12369-010-0068-5
Ho CC, MacDorman KF (2010) Revisiting the uncanny valley theory: developing and validating an alternative to the Godspeed indices. Comput Hum Behav 26(6):1508–1518. https://doi.org/10.1016/j.chb.2010.05.015
Huang A, Lee F, Nass C, Paik Y, Swartz L (2001) Can voice user interfaces say “I”? An experiment with recorded speech and TTS. https://www.researchgate.net/profile/Clifford_Nass/publication/228822009_Can_voice_user_interfaces_say_I_An_experiment_with_recorded_speech_and_TTS/links/09e4151086142dafe8000000.pdf. Accessed 11 May 2020
Johnson DG (2011) Software agents, anticipatory ethics, and accountability. In: Marchant GE et al (eds) The growing gap between emerging technologies and legal-ethical oversight. Springer, Dordrecht, The Netherlands, pp 61–76
Kahn PH, Kanda T, Ishiguro H, Freier NG, Severson RL, Gill BT, Shen S et al (2012) “Robovie, you’ll have to go into the closet now”: children’s social and moral relationships with a humanoid robot. Dev Psychol 48(2):303–314. https://doi.org/10.1037/a0027033
Kahn PH, Gary HE, Shen S (2013) Children’s social relationships with current and near-future robots. Child Dev Perspect 7(1):32–37. https://doi.org/10.1111/cdep.12011
Kanda T, Sato R, Saiwaki N, Ishiguro H (2007) A two-month field trial in an elementary school for long-term human-robot interaction. IEEE Trans Robot 23(5):962–971. https://doi.org/10.1109/tro.2007.904904
Keller J (1997) Autonomy, relationality, and feminist ethics. Hypatia 12(2):152–164. https://doi.org/10.1111/j.1527-2001.1997.tb00024.x
Kelley JF (1984) An iterative design methodology for user-friendly natural language office information applications. ACM Trans Inf Syst 2(1):26–41. https://doi.org/10.1145/357417.357420
Kory-Westlund JM, Breazeal CL (2015) Deception, secrets, children, and robots: what's acceptable? Presented at the workshop "the emerging policy and ethics of human-robot interaction" at the 10th international conference on human-robot interaction. http://www.openroboethics.org/hri15/wp-content/uploads/2015/02/Mf-Westlund.pdf. Accessed 11 May 2020
Kory-Westlund JM, Breazeal CL (2016) Transparency, teleoperation, and children’s understanding of social robots. Proceedings of the 11th international conference on human-robot interaction, pp 625–626. https://doi.org/10.1109/HRI.2016.7451888
Kory-Westlund JM, Breazeal CL (2019a) A long-term study of young children’s rapport, social emulation, and language learning with a peer-like robot playmate in preschool. Front Robot AI. https://doi.org/10.3389/frobt.2019.00081
Kory-Westlund JM, Breazeal C (2019b) Exploring the effects of a social robot’s speech entrainment and backstory on young children’s emotion, rapport, relationship, and learning. Front Robot AI 6:54. https://doi.org/10.3389/frobt.2019.00054
Kory-Westlund JM, Martinez M, Archie M, Das M, Breazeal CL (2016) Effects of framing a robot as a social agent or as a machine on children’s social behavior. Proceedings of the 25th international symposium on robot and human interactive communication, pp 688–693. https://doi.org/10.1109/roman.2016.7745193
Kory-Westlund JM, Park HW, Williams R, Breazeal CL (2018) Measuring children’s long-term relationships with social robots. Proceedings of the 17th conference on interaction design and children, pp 207–218. https://doi.org/10.1145/3202185.3202732
Kruijff-Korbayová I, Gerstenberger C, Kukina O, Schehl J (2008) Generation of output style variation in the SAMMIE dialogue system. Proceedings of the fifth international natural language generation conference, pp 129–137. https://doi.org/10.3115/1708322.1708347
Larzelere RE, Huston TL (1980) The dyadic trust scale: toward understanding interpersonal trust in close relationships. J Marriage Fam 42(3):595–604. https://doi.org/10.2307/351903
Laursen B, Hartup WW (2002) The origins of reciprocity and social exchange in friendships. New Dir Child Adolesc Dev 95:27–40. https://doi.org/10.1002/cd.35
Lee KM (2004) Presence, explicated. Commun Theory 14(1):27–50. https://doi.org/10.1111/j.1468-2885.2004.tb00302.x
Leite I, Lehman JF (2016) The robot who knew too much: toward understanding the privacy/personalization trade-off in child-robot conversation. Proceedings of the 15th conference on interaction design and children, pp 379–387. https://doi.org/10.1145/2930674.2930687
Leite I, Pereira A, Lehman JF (2017) Persistent memory in repeated child–robot conversations. Proceedings of the conference on interaction design and children, pp 238–247. https://doi.org/10.1145/3078072.3079728
Lewis M (2011) The origins and uses of self-awareness or the mental representation of me. Conscious Cogn 20(1):120–129. https://doi.org/10.1016/j.concog.2010.11.002
Ligthart MEU, Fernhout T, Neerincx MA, van Bindsbergen KLA, Grootenhuis MA, Hindriks KV (2019a) A child and a robot getting acquainted: interaction design for eliciting self-disclosure. Proceedings of the international joint conference on autonomous agents and multiagent systems, pp 61–70. http://edithlaw.ca/teaching/cs889/w20/readings/disclosure.pdf
Ligthart MEU, Neerincx MA, Hindriks KV (2019b) Getting acquainted for a long-term child-robot interaction. Proceedings of the international conference on social robotics, pp 423–433. https://doi.org/10.1007/978-3-030-35888-4_39
Ligthart MEU, Neerincx MA, Hindriks KV (2020) Design patterns for an interactive storytelling robot to support children’s engagement and agency. Proceedings of the international conference on human-robot interaction (virtual), pp 409–418. https://doi.org/10.1145/3319502.3374826
Lutz C, Schöttler M, Hoffmann CP (2019) The privacy implications of social robots: scoping review and expert interviews. Mobile Media Commun 7(3):412–434. https://doi.org/10.1177/2050157919843961
McCroskey JC, Richmond VP, Daly JA (1975) The development of a measure of perceived homophily in interpersonal communication. Hum Commun Res 1(4):323–332. https://doi.org/10.1111/j.1468-2958.1975.tb00281.x
Nass C, Brave S (2005) Wired for speech: how voice activates and advances the human-computer relationship. MIT Press, Cambridge
Neeley EL (2014) Machines and the moral community. Philos Technol 27(1):97–111. https://doi.org/10.1007/s13347-013-0114-y
Oliveira R, Arriaga P, Santos FP, Mascarenhas S, Paiva A (2021) Towards prosocial design: a scoping review of the use of robots and virtual agents to trigger prosocial behaviour. Comput Hum Behav 114:106547. https://doi.org/10.1016/j.chb.2020.106547
Peter J, Kühne R, Barco A, de Jong C, van Straten CL (2019) Asking today the crucial questions of tomorrow: social robots and the internet of toys. In: Holloway D (ed) The internet of toys. Palgrave Macmillan, Cham, pp 25–46. https://doi.org/10.1007/978-3-030-10898-4_2
Read J, Mazzone E, Höysniemi J (2005) Wizard of Oz evaluations with children: deception and discovery. Proceedings of the fourth conference on interaction design and children. https://s3.amazonaws.com/academia.edu.documents/30795328/wizard_of_oz_evaluations.pdf. Accessed 11 May 2020
Rijsdijk SA, Hultink EJ (2003) “Honey, have you seen our hamster?” Consumer evaluations of autonomous domestic products. J Prod Innov Manag 20(3):204–216. https://doi.org/10.1111/1540-5885.2003003
Roloff ME (1976) Communication strategies, relationships, and relational change. In: Miller GR (ed) Explorations in interpersonal communication. Sage Publications, Beverly Hills, pp 173–195
Rosenthal-von der Pütten A, Strasmann C, Mara M (2017) A long time ago in a galaxy far, far away... The effects of narration and appearance on the perception of robots. Proceedings of the 26th international symposium on robot and human interactive communication, pp 1169–1174. https://doi.org/10.1109/ROMAN.2017.8172452
Schadenberg BR, Neerincx MA, Cnossen F, Looije R (2017) Personalising game difficulty to keep children motivated to play with a social robot: a Bayesian approach. Cogn Syst Res 43:222–231. https://doi.org/10.1016/j.cogsys.2016.08.003
Scheutz M (2012) The inherent dangers of unidirectional emotional bonds between humans and social robots. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, pp 205–221
Serpell JA (2003) Anthropomorphism and anthropomorphic selection: beyond the “cute response.” Soc Anim 11(1):83–100. https://doi.org/10.1163/156853003321618864
Severson RL, Lemm KM (2016) Kids see human too: adapting an individual differences measure of anthropomorphism for a child sample. J Cogn Dev 17(1):122–141. https://doi.org/10.1080/15248372.2014.989445
Shiomi M, Kanda T, Howley I, Hayashi K, Hagita N (2015) Can a social robot stimulate science curiosity in classrooms? Int J Soc Robot 7(5):641–652. https://doi.org/10.1007/s12369-015-0303-1
Sinoo C, van der Pal S, Henkemans OAB, Keizer A, Bierman BPB, Looije R, Neerincx MA (2018) Friendship with a robot: children’s perception of similarity between a robot’s physical and virtual embodiment that supports diabetes self-management. Patient Educ Couns 101(7):1248–1255. https://doi.org/10.1016/j.pec.2018.02.008
Somanader MC, Saylor MM, Levin DT (2011) Remote control and children’s understanding of robots. J Exp Child Psychol 109(2):239–247. https://doi.org/10.1016/j.jecp.2011.01.005
Stafford L (2004) Communication competencies and sociocultural priorities of middle childhood. In: Vangelisti AL (ed) Handbook of family communications. Lawrence Erlbaum, Mahwah, pp 311–332
Sternberg RJ (1987) Liking versus loving: a comparative evaluation of theories. Psychol Bull 102(3):331–343. https://doi.org/10.1037/0033-2909.102.3.331
Stower R, Calvo-Barajas N, Castellano G, Kappas A (2021) A meta-analysis on children’s trust in social robots. Int J Soc Robot. https://doi.org/10.1007/s12369-020-00736-8
Tolksdorf NF, Siebert S, Zorn I, Horwath I, Rohlfing KJ (2020) Ethical considerations of applying robots in kindergarten settings: towards an approach from a macroperspective. Int J Soc Robot. https://doi.org/10.1007/s12369-020-00622-3
Tozadore D, Pinto A, Romero R, Trovato G (2017) Wizard of Oz vs autonomous: children's perception changes according to robot's operation condition. Proceedings of the 26th international symposium on robot and human interactive communication, pp 664–669. https://doi.org/10.1109/ROMAN.2017.8172374
Turkle S (2007) Authenticity in the age of digital companions. Interact Stud 8(3):501–517. https://doi.org/10.1017/CBO9780511978036.006
Turkle S, Breazeal C, Dasté O, Scassellati B (2006) First encounters with Kismet and Cog: children respond to relational artifacts. In: Messaris P, Humphreys L (eds) Digital media: transformations in human communication. Peter Lang Publishing, New York, pp 313–330
van den Berghe R, Verhagen J, Oudgenoeg-Paz O, van der Ven S, Leseman P (2019) Social robots for language learning: a review. Rev Educ Res 89(2):259–295. https://doi.org/10.3102/0034654318821286
van der Drift E, Beun RJ, Looije R, Blanson Henkemans O (2014) A remote social robot to motivate and support diabetic children in keeping a diary. Proceedings of the ninth international conference on human-robot interaction, pp 463–470. https://doi.org/10.1145/2559636.2559664
van Straten CL, Peter J, Kühne R, de Jong C, Barco A (2018) Technological and interpersonal trust in child-robot interaction: an exploratory study. Proceedings of the sixth international conference on human-agent interaction, pp 253–259. https://doi.org/10.1145/3284432.3284440
van Straten CL, Kühne R, Peter J, de Jong C, Barco A (2020a) Closeness, trust, and perceived social support in child-robot relationship formation: development and validation of three self-report scales. Interact Stud 21(1):57–84. https://doi.org/10.1075/is.18052.str
van Straten CL, Peter J, Kühne R (2020b) Child–robot relationship formation: a narrative review of empirical research. Int J Soc Robot 12(2):325–344. https://doi.org/10.1007/s12369-019-00569-0
van Straten CL, Peter J, Kühne R, Barco A (2020c) Transparency about a robot’s lack of human psychological capacities: effects on child-robot perception and relationship formation. ACM Trans Hum-Robot Interact. https://doi.org/10.1145/3365668
Wrigley A (2007) Personal identity, autonomy and advance statements. J Appl Philos 24(4):381–396. https://doi.org/10.1111/j.1468-5930.2007.00367.x
Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 682733), awarded to the second author. We would also like to express our gratitude to the participating schools, as well as to the children who took part in this study and their parents.
Author information
Contributions
Conceptualization: (all authors); methodology: (all authors); data collection: (CLvS); formal analysis and investigation: (CLvS, JP, RK); writing—original draft preparation: (CLvS); writing—review and editing: (all authors); funding acquisition: (JP).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
van Straten, C.L., Peter, J., Kühne, R. et al. The wizard and I: How transparent teleoperation and self-description (do not) affect children’s robot perceptions and child-robot relationship formation. AI & Soc 37, 383–399 (2022). https://doi.org/10.1007/s00146-021-01202-3