1 Introduction

Within the area of child–robot interaction (cHRI) research, many studies strive to develop interactive systems that are not only designed to be applied in kindergarten settings but whose effects are also already evaluated within such settings. An evaluation aims to reveal problems and limitations that children face when interacting with a system [1]. Often, it is also used to examine whether children gain any advantage from the interaction [2], where such advantages can range from learning benefits to mere entertainment [3]. Overall, in the optimal case, an evaluation attests to the system’s usability [4]: the system is used by naive users (kindergarten children) in an everyday setting (the kindergarten), performs with a certain duration and reliability, and yields some benefits (e.g., learning effects) for the users. To some degree, the state of the art of human–robot interaction systems even requires evaluations: a study comprising not only the design of a system but also its evaluation within the target context is considered more complete, valid, and reliable than one without a “successful” evaluation [5]. However, the quality, depth, and validity of any evaluation strongly depend on the quality of its indicators, their reliability, and how well they meet the particular requirements of the specific target context—including ethical requirements.

Though there are recurrent and overlapping ethical issues related to interactive systems, every context in which an interaction takes place—including the context of a kindergarten as an institution—poses its own set of ethical questions and social norms that need to be thoroughly and systematically examined. In current studies on cHRI, those social norms, specific rules, and context-sensitive practices are barely considered. However, children are a vulnerable group whose perception, communicative needs, and emotions differ from those of adults—a reason to justify an especially intense consideration of possible unintended consequences. Accordingly, in the following, we will argue that the methodology in the field of cHRI should go beyond a design that merely focuses on the evaluation of quantifiable effects (e.g., learning outcomes) and usability. We argue that currently used methodologies are limited to a microperspective, which takes individual children, their cognitive skills, and their behaviors or emotional displays into account. What is lacking, however, is a macroperspective that addresses the impact of the technology on a more global level with respect to its use within an institutional setting such as the kindergarten. Further, there has been little focus on the impacts the introduction of social robots into kindergartens could have, and on the degree to which the interactive content and design address institutional goals (such as ensuring inclusive settings and fostering autonomy), the institutional trust in pedagogically motivated values and concepts, the role of the involved key stakeholders (e.g., parents and educators), and the expectations regarding activities in the institutional kindergarten context.

In the following, we will first give a brief summary of how socially interactive robots have been defined in current research, how these definitions relate to specific ethical concerns, and why a macroperspective is required to broaden the scope of these debates. We will then introduce some important ethical concerns that are of particular relevance for cHRI. We derived these ethical concerns from two sources: the first is international state-of-the-art studies on cHRI and the way child–robot interaction is conceived in them [6,7,8,9,10,11,12]. We will then proceed by contextualizing social robots within the setting of a kindergarten. Taking a macroperspective, we point out the role of the kindergarten as an institution. The second source for our ethical concerns is the empirical insights gained through our own research on kindergarten-age children [13,14,15,16]. Here, we will highlight the caregiver’s role. Structuring our presentation along relevant aspects such as human–robot interaction, the kindergarten as an institution, children as a vulnerable group, the caregiver’s role, and pedagogical concepts within the kindergarten, we will point out potential risks and “blind spots” in current practices in cHRI research. On this basis, we will conclude with a discussion of practical implications that need to be considered before commencing any cHRI studies in kindergartens. Although our list is not exhaustive, we aim to foster a more systematic approach from a macroperspective to ethical considerations when designing cHRI studies and to raise awareness of the unintended consequences cHRI research may have when studying vulnerable groups.

2 Ethical Aspects of Human–Robot Interaction

As a prerequisite for our considerations, we provide the definition of socially interactive robots that we have in mind when elaborating on their use and function in the context of a kindergarten from the microperspective. Hegel et al. [17] attribute to a socially interactive robot not only the recognition and use of social signals such as gaze and gestures but also the cognitive ability to memorize experiences made within a social interaction and to learn from them. To realize the property of learning, Breazeal [18] highlights the importance of shared (real or virtual) environments that open up the possibility to experience entities in interaction—material out of which learning contents can be formulated. Furthermore, she claims that the capability to identify persons, i.e., with whom the robot is interacting as well as what and how the person is doing, is necessary for a robot to learn and a prerequisite for successful interaction. Bringing these elements together, the learning setting seems to require both awareness of the shared environment and a model of the (activities of the) interaction partner. In recent research, the learning capabilities of a robot are pushed forward in a specific way: Charisi et al. [19] propose a symbiotic co-development, i.e., a dynamic interaction in which both the robot and the child can learn from each other through joint activities in collaborative tasks.

While Breazeal [18] focuses on the cognitive properties of the robot, Hegel et al. [17] emphasize the robot’s social form, social function, and social context as categories relevant for designing social robots. Social forms, e.g., the robot’s eyes, are considered to have a specific social function, such as the ability to perceive the interaction partner as well as other entities in the world. For children, it has been documented that when a robot is equipped with anthropomorphic features, they expect it to behave humanlike [20]. What seems obvious from the above-mentioned definition is that current technology constructs human–robot interactions “along the rules of social behaviors in humanlike interpersonal interactions, which invites people to have meaningful social interactions with robots” [21, p. 590]. Because of these governing rules, de Graaf [21] raises important issues regarding the ethics of interaction with such technology. One issue concerns bonding: humans are fundamentally motivated not only to desire meaningful and enduring relationships with other social beings but also to form emotional attachments to artificial beings. In child–robot interaction, this issue is reinforced by studies demonstrating that children not only treat robots as informants and therefore as trustworthy interaction partners [22] but are also more likely than adults to follow robots’ suggestions [23].

In the definition of socially interactive robots proposed by Hegel et al. [17], an important expansion concerns the involvement of the social context: the authors point to the fact that both the form (e.g., eyes) and the function (e.g., visual perception) mentioned above serve the social context. However, it seems that only the fulfillment of specific tasks (e.g., games or cleaning up) and specific roles (e.g., game partner or assistant) is comprised by the term social context. By sharing a physical environment with a human, robots are capable of producing “a physical impact” [21, p. 592] on this environment in a positive way by fulfilling their tasks, such as taking things from one place to another, or in a negative way by tripping over objects or running into people. However, the social context should be considered more broadly by including the full macroperspective proposed by de Graaf [21], which involves social norms, values, and morals; that is, robots enter both the physical and the social environment as they perform actions that interweave with human intentions and social goals: “Once a robot has entered a social environment, it will alter the distribution of responsibilities and roles within that environment as well as how people act in that use context or situation” [21, p. 592]. Consequently, technology that is part of a social environment shapes and thus changes the social context; that is, it changes the way people perceive, perform actions, and create new practices [21]. Therefore, besides the technical properties or actual capabilities of the robot, it is vital that the social context of the robot’s presence be acknowledged in terms of the moral relevance the technology has in mediating certain beliefs and practices [21].

3 Kindergarten as an Institution

Against this background of what socially interactive robots are and how they have the potential to influence social contexts, in the following we elucidate the issues that are relevant to cHRI research in institutional kindergarten settings.

3.1 Institution as a Shelter

As institutions of early childhood education and care, kindergartens need to follow specific rules, norms, values, and practices. The specific rules pertain to the function a kindergarten serves for parents, who rely on the possibility of sharing the care for their children with professional staff. Furthermore, the norms and values in kindergarten usually follow pedagogical concepts that focus on the integration of education and the provision of care tailored to the needs of children, families, and the broader society [24]. In addition, the pedagogical concepts are reflected in the activities that take place within this particular social context [25]. In this vein, it has been shown that parents indeed expect a highly motivated pedagogical environment for their children in kindergarten and consider care and an educational environment to be major priorities [26]. Relying on pedagogical concepts, which implicitly contain the principle of being beneficial for the children [27, 28], parents place enormous trust in a kindergarten and its professional staff.

Against this background of trust in pedagogically motivated values and concepts, the use of a social robot in the institutional context of a kindergarten raises two central questions: first, does the robot act in accordance with the social rules and pedagogical concepts of the kindergarten, and second, if not, will the use of a robot that is not geared to the pedagogical concepts of a kindergarten cause a loss of trust in the institution and the professional staff? Concerning trust, two further issues should be differentiated. On the one hand, the parents could trust the educators less because they see little correspondence between the robot interactions and the pedagogical concepts applied so far. To our knowledge, this issue has not been considered in studies so far, and there is no information about the degree to which a child–robot interaction can impact the relationship between educators and parents. On the other hand, an interaction with a robot could also impact children’s trust in their educators or in the technology. Charisi et al. [19] identified the predictability of a partner’s actions as crucial in establishing trust. In the context of a kindergarten, predictability is reduced when children are unable to predict their partner’s actions, as is the case with a technology that children are barely familiar with. In this vein, we propose that the change in children’s trust will strongly depend on the concordance between the children’s regular activities and the activities that the robot initiates during the interaction. Children are used to their daily routines in a kindergarten and have a familiar bond with their educators, which could be disturbed by a social robot if its use is not adapted to the activities within the kindergarten. To mitigate such discontinuities in daily routines, one solution could be to substantially enhance warm-up activities, i.e., to familiarize the children with the robot and its current capabilities before the actual research is conducted. Recently, Vogt et al. [29] reported a successful script for introducing a robot to kindergarten children, first in a group and then individually to the children participating in a cHRI study. However, the warm-up activity is often different from what the robot performs during the actual study, because an experimental setup requires objective testing conditions, which might be compromised when warm-up activities are of a similar nature. Another negative impact could come from incorrectly working technology. A recent television report on the use of robots for children with atypical development [30] described a case in which a boy who had been advancing his vocabulary and having a lot of fun with a robot lost engagement and trust after the robot malfunctioned and gave an inadequate answer. Currently, socially interactive robots are not aware of their success in an interaction. From this single observation, however, it appears that a kind of awareness of what went wrong and may have caused a loss of trust could repair an interaction in the long term. In fact, recent research suggests that interactive behaviors such as apologies can repair trust in older children [31]—an emotional ability and strategy that has recently been considered when designing robots [32] and could be a quite straightforward feature in dialogue designs.
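
To illustrate how straightforwardly such a repair strategy could be realized, the following minimal sketch shows a dialogue step that inserts an apology and a retry when the robot detects that its previous turn likely failed. It is our own illustrative assumption of such a design, not a published implementation; the function names, the confidence threshold, and the failure heuristic are hypothetical.

```python
# Minimal, illustrative sketch of an apology-based trust-repair branch
# in a dialogue manager. All names and thresholds are hypothetical.

APOLOGY = "I'm sorry, I made a mistake. Let's try that again."

def next_utterance(asr_confidence: float, expected_answer_given: bool,
                   planned_utterance: str) -> str:
    """Return the robot's next utterance, prepending a repair turn
    when the previous exchange likely failed."""
    exchange_failed = asr_confidence < 0.4 or not expected_answer_given
    if exchange_failed:
        # Acknowledge the failure explicitly before retrying: apologies
        # have been found to repair trust in older children [31].
        return APOLOGY + " " + planned_utterance
    return planned_utterance

# Example: the robot misheard the child's answer (low ASR confidence).
print(next_utterance(0.2, False, "Which animal did we just see?"))
```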

In addition to the issue of trust mentioned above, the way interactions are organized within the social context of a kindergarten needs to be considered. When children attend a kindergarten, they not only rely on the familiar environment, they also enter a social context that is mainly group structured [33]: a major part of the activities in a kindergarten takes place—structured or unstructured—in group contexts [33]. While some warm-ups in previous cHRI studies have been conducted with groups of children within the social organization of the kindergarten [13], the predominant format for these studies has been to apply social robots in one-on-one situations, with the robot mostly fulfilling the role of a teacher or tutor [34]. Whereas this is mainly due to technical limitations, another aspect concerning participation plays an even more important role: usually, not the entire group of children can participate in an interaction with the robot, either for practical reasons or because the parents do not give their consent. This fact inevitably leads to a divide within the group. Thus, one part of the group is excluded—a state that is atypical for kindergarten, because in everyday activities, children are used to experiencing all activities jointly. A possible disappointment at not having participated in the interaction may also affect their confidence within the group, towards the educators, and towards the technology. Admittedly, it is evident that other types of disruptions or discontinuities also occur in everyday kindergarten settings [35]; however, this circumstance should not undermine the ethical demands. In addition, those kinds of disruptions rather emerge out of (or as a byproduct of) everyday activities and are not designed to deviate from the usual activities. In contrast, the design of currently used social robots disregards the social context of activities and could therefore be considered as designed for discontinuities. Enhanced warm-ups could therefore be one possibility to familiarize the children with the interactive properties of a robot and to reduce false expectations (e.g., of an interaction in group contexts or in unscripted situations). In addition, adapting experimental procedures to fit more closely the usual activities and settings commonly present in kindergarten could be another possibility. For example, Conti et al. [11] employed a usual small-group painting activity in their experimental design and conducted the warm-up session with all children regardless of whether they were participating in the study, allowing all children to take part in at least some part of the activity.

While we are aware that other types of disruptions and all the disappointments and negative effects can also happen in children’s everyday settings, we would like to highlight the context of our argument: it is not the negative experience per se that we consider ethically problematic but the possible negative experience caused by the implementation of a robot that is not adjusted to the social context of a kindergarten—an institution devoted to providing a pedagogically motivated setting for activities. In addition, we wish to stress that most cHRI studies implicitly provide solutions to these problems, as they have been approved by ethics committees. However, we are of the opinion that these problems need to become an explicit part of the methodology used in cHRI research to address the desired ethical regulations.

3.2 Legal Implications

The use of a robot in kindergarten settings can also have legal implications. With reference to Sharkey [36], we identified three dimensions that are crucial to consider:

3.2.1 Safety

In terms of safety [37], the question arises whether educators and children are safe when interacting with the robot; for instance, can the robot hurt or scare a child if it falls? As some activities taking place within the context of a kindergarten are highly unstructured (e.g., free play), it is not possible to program a robot in such a way that all possible scenarios are taken into account. Most importantly, however, in most cases, the educators know little about a robot, its functions, and its movements and can barely intervene when the technology is working incorrectly. Moreover, current social humanoid robots (e.g., the Nao by SoftBank Robotics) lack reflective abilities, i.e., abilities to reflect on the correctness and success of their own actions. For this reason, they should not be used in kindergarten settings without human experts to monitor their work and effects.

3.2.2 Liability

Another important legal aspect is the liability for any harm that may be done to the children and/or educators if the robot is not appropriately applied. Currently, social robots can operate autonomously only in quite restricted contexts, and fully autonomous behavior in unstructured environments is almost impossible at present. For the future, we urgently need to clarify the question of who (designer, producer, researcher, user, etc.) is liable in which situations if social robots are supposed to act autonomously.

3.2.3 Privacy

The use of social robots in a kindergarten could affect not only the privacy of the children but also the privacy of their educators and parents. For example, consider the case in which a robot acting as a peer or companion elicits private information from a child, as children readily tell people about what is new and what they have recently experienced [38]. If the robot has the ability to memorize this information by, for example, unintentionally recording it, this could violate the child’s and also the family’s privacy. Another privacy issue arises especially in contexts in which a social robot is applied to support language learning, though it is not restricted to this area of application: collecting and analyzing personal data, such as learning gains, emotional engagement, and conducted activities, is a crucial added value that can easily be used to adjust and adapt further activities to the amount and quality of past interactions. Although these types of approaches are already being used in practice [39] and legal and normative standards exist for them, the more complex ethical implications and their consequences in the specific context have generally been disregarded. While research studies have specific procedures for storing personal data, storing past interaction data raises ethical questions that still need to be resolved, such as who is responsible for the evaluation and what are the (therapeutic or pedagogical) concepts behind it? If personal learning and interaction data are evaluated, the general results would most likely be used by educators who might not always be aware of the specific scope and limitations of an evaluation. While such evaluations of personal and interactive data could assist educators in adapting activities to the child’s progress, they may provide too narrow an assessment.
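
As a hedged illustration of what data minimization could mean in this context, the following sketch stores only a pseudonymous identifier and coarse session aggregates rather than raw recordings. It reflects our own assumptions about a possible design, not an existing system; the record fields and the salted-hash pseudonymization scheme are simplified, hypothetical choices.

```python
# Illustrative sketch of data-minimizing session logging: only a
# pseudonym and coarse aggregates are kept, never raw audio or video.
# Fields and hashing scheme are hypothetical design assumptions.
import hashlib
from dataclasses import dataclass

@dataclass
class SessionRecord:
    pseudonym: str        # irreversible pseudonym, not the child's name
    activity: str         # e.g., "vocabulary game"
    turns_completed: int  # coarse measure of the interaction
    learning_gain: float  # pre/post difference on the task measure

def pseudonymize(child_name: str, study_salt: str) -> str:
    """Derive a stable pseudonym; the salt must stay with the
    responsible researcher so records cannot be linked back without it."""
    return hashlib.sha256((study_salt + child_name).encode()).hexdigest()[:12]

record = SessionRecord(pseudonymize("<name>", "<salt>"),
                       "vocabulary game", turns_completed=14,
                       learning_gain=0.3)
print(record)
```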

4 Children as a Vulnerable Group

Social robots are currently being developed primarily for social groups that are most vulnerable and face particular challenges in making informed decisions or giving their consent, such as children. Therefore, it is important to be aware of the limitations of usual practices in using robots in early childhood education and in kindergarten settings. However, we think that the issues discussed below are also relevant to other domains of social robotics involving vulnerable groups, such as elderly people.

4.1 Participation

As an institution, kindergartens have the responsibility to enable all children to participate in the daily activities and to arrange an inclusive educational setting [40]. Accordingly, applying a social robot in a kindergarten requires a design that is accessible to all children and considers the diversity of the children’s behaviors, needs, and interests. As mentioned, a deviation from this design could result in discrimination against and exclusion of specific children. Especially children with atypical development strongly rely on contingent behavior in an interaction [41]. However, current robots are hardly able to perform contingent interaction; in fact, they are unable to interact sufficiently reciprocally with a child [34, 42]. The reason for this deficit in current technology is likely the persisting lack of robust speech recognition for children’s verbal utterances—a prerequisite for contingent interaction. As current automatic speech recognition engines are barely able to reliably recognize children’s speech [43], particularly children with phonological or phonetic disorders may be less successful in interacting with a robot and will therefore be excluded from participation. These technological challenges in cHRI, such as establishing a smooth and contingent interaction between a social robot and a child, are still unresolved, and significant progress in the involved technical fields is required to meet the demands of the heterogeneous population in the institutional context of a kindergarten [34]. As long as the technology is not sufficiently developed, we need carefully developed approaches to design the use or the evaluation of such systems in a way that avoids negative consequences.
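
One such carefully developed approach, sketched below under our own assumptions, would be to route low-confidence speech recognition results to nonverbal or caregiver-mediated alternatives instead of letting the exchange fail repeatedly for a child whose speech the system cannot recognize. The thresholds and fallback actions are illustrative assumptions, not an established design.

```python
# Illustrative fallback policy for unreliable recognition of children's
# speech. Thresholds and actions are hypothetical design assumptions.

def choose_action(asr_confidence: float, failed_attempts: int) -> str:
    if asr_confidence >= 0.7:
        return "respond_verbally"          # recognition is trustworthy
    if failed_attempts == 0:
        return "repeat_question"           # one gentle retry
    if failed_attempts == 1:
        return "offer_pointing_choices"    # switch to a nonverbal format
    return "hand_over_to_caregiver"        # avoid repeated failure/exclusion

# A child with a phonological disorder keeps yielding low confidence:
for attempt, confidence in enumerate([0.35, 0.40, 0.30]):
    print(attempt, choose_action(confidence, attempt))
```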

Beyond that, in many studies in which robots are applied to scaffold children’s language learning, the way the robot interacts with the children has often been based on adult interactions [42], with a dialogue design relying heavily on turn-taking behavior characterized by verbal exchanges [44]. However, what is needed are dialogue designs that allow for the peculiarities of children’s communicative behavior; as a prominent example, children make use of nonverbal behaviors to a high degree and often fall back on nonverbal signals when confronted with complex demands [45]. In child–robot interaction, the effects of nonverbal signals are currently being explored [42, 46, 47], and Baxter et al. [48] found that when a robot provided more responsive nonverbal behavior, children had a more positive impression of the interaction. This suggests that children not only respond with nonverbal behavior but are also sensitive to it. However, as current robot dialogue designs are unable to sufficiently consider the multimodal behavior of children, even in teleoperated settings, breakdowns may occur in the interactions [49], which could be particularly prevalent in children with atypical development [50] because their means for participating in an interaction might be limited from the beginning.

In addition to the design of a dialogue with children, the social role in which the robot is introduced to the kindergarten children brings an ethical challenge. Consider the case where a robot with a limited repertoire of behavior is introduced as a companion or peer to a child. When the child realizes the robot’s restrictions, they may become frustrated or disappointed [36]. While such disappointments could also result from a child’s interactions with other social entities such as a pet, as outlined by Sharkey [36], robots cannot show real affection like pets and do not rely on the children’s imagination like cuddly toys. Rather, because their technical features make them appear to be alive, users tend to anthropomorphize robots, which can be seen in phrasings such as “the robot is tired” when the battery is low or “the robot is ill” when it does not act as intended [51]. Although these aspects can also affect laboratory settings, when social robots are used in the kindergarten, there may be further implications. A laboratory setting, in contrast, “frames” the interactions differently: visiting a lab and informing the parents, who talk to their child about the visit, might be a better preparation, resulting in the children having specific expectations and reflections. Again, it is the potential negative experience within the social context of the kindergarten that we consider ethically problematic, not the negative experience itself. As a group of our society, children represent an intrinsically heterogeneous population with diverse demands within an interaction [45]. Social robots have to meet these requirements, such as appropriate contingent turn-taking behavior or adequate speech recognition, in order to justify their use in kindergartens and not exclude certain children. Otherwise, a gap may arise between what researchers define as social behavior and what is perceived by those who interact with the robot [52]. Thus, in our view, the design should be evaluated in advance so that it is not applied in kindergarten settings in an unethical manner.

4.2 Autonomy

A common procedure in many cHRI studies is to call the children out of the group to interact with the experimenter in a separate room, where they are often left alone with the experimenter, without a familiar caregiver. However, younger children in particular have not yet learned to express their own needs to strangers [53] and rely on a caregiver’s emotional support when faced with novel situations [14]. Although strict research ethics oblige researchers to inform a child that she or he can withdraw from the interaction at any time if they feel uncomfortable, it is difficult to recognize whether a child is capable of doing so. In fact, children with specific developmental delays (e.g., children with developmental language delay) may have difficulty expressing their lack of understanding and thus communicating their discomfort. At this point, and with our expertise on language and cognitive development, we would like to stress that expressing consent to participate in an interaction, including one with a robot, involves metacognitive competence, that is, the ability to overview the social situation and perceive one’s own role, needs, and rights; metacommunication, the ability to communicate that the ongoing communication is not desirable; and understanding [54]. As kindergarten children are at a very early stage of developing these competences, the researcher cannot categorically state that the child has given their permission and understood that they can withdraw. In laboratory situations, by contrast, a child has a familiar caregiver to “translate” the child’s emotions and intervene when necessary [14]. Therefore, the presence of a caregiver in experimental settings is, in our view, also preferable in cHRI studies. In association with this, methods used to inform and consult with the children that respect their autonomy and vulnerability need to be aimed at their abilities of metacommunication to ensure they are able to make responsible decisions during the interaction; this could be done in a preparation stage in the context of the kindergarten before the child–robot interaction commences.

4.3 Relationships

Finally, human–robot interactions also represent a social relationship. Beyond the issue of communication difficulties, de Graaf [21, p. 594] reports the debate about whether, in human–robot interaction, people are actually deceived into thinking that they could establish a relationship with robots over time. While, in adults, this could be addressed by explaining what the robot is capable of and how the technology is constructed, in children, critical technological thinking is difficult to induce. Of course, it could be argued that in everyday settings people also run the risk of being deceived by others [21]; for children, however, interactions with a robot could have more severe and long-lasting effects, as children are more vulnerable to social influences [55]. More critically, Vollmer et al. [23] recently found that children are more likely than adults to be convinced by erroneous social robot suggestions. Problems thus arise because current studies barely provide the possibility for the children to ask questions and have the robot explained to them. Instead, warm-up sessions and interactions tend to support the impression that the robot is “animate” [29, p. 3]. Therefore, the responsibility of researchers conducting cHRI studies should lead to approaches in which children are accompanied by familiar caregivers who can afterwards explain and critically reflect on the technology with them. Currently, we still lack a conception of how such reflections on technology can be realized [54]. With regard to this point, robot literacy concepts need to be developed for young children as well as for caregivers, covering knowledge about robots, reflections about differences between humans and robots, reflective thinking about human–robot interactions, and competence in choosing from a range of possible interactions with robots. While robots are already being used in therapeutic settings (e.g., care centers for the elderly), robot literacy concepts are lacking there as well.

5 Role of the Caregiver

When testing in a laboratory environment, caregivers are usually present and can be an additional resource for the child. The role caregivers play in child–robot interactions was recently investigated by Rohlfing et al. [14]. This research was motivated by the fact that, when watching TV, children benefit from adults introducing them to the story line as well as to the information available via this medium. Dubbed “co-viewing”, the involvement of an adult as an interpreter of what a medium conveys and how to use it is considered beneficial and was found to increase learning effects [56]. The fact that an adult can provide helpful access to a situation and its interpretation is well known in developmental psychology. The basic phenomenon of social referencing exemplifies the critical role of a caregiver, especially for young children. When experiencing novel or unfamiliar situations, children turn to the caregiver and are inclined to align themselves with the emotions that the adult is displaying. As illustrated in Fig. 1, the caregiver encourages her young child to have positive emotions when faced with the robot. To put it in Baldwin’s words [57, p. 135], “when infants as young as 8–12 months of age encounter a new person, object, or event, they will sometimes look toward a parent and subsequently respond to the novel circumstance in accord with the affective expression that the parent displays.” While this effect is well documented in developmental psychology for young infants, children at kindergarten age also tend to refer socially when faced with an unfamiliar situation [14]. In these kinds of situations, adults unfold a specific, regulatory discourse in which they convey their emotional attitude towards the new object or event and address the child’s feelings towards it [58]; therefore, if younger children rely on the emotions of familiar persons to interpret unfamiliar situations, they may feel discomfort when left without a caregiver. Admittedly, this negative effect will be smaller in older children. In the study by Rohlfing et al. [14], data from a child–robot study by Lücking et al. [15] were investigated with a focus on how often children at the age of 4–5 years turn to the caregiver and in what ways the caregiver encourages children to continue the interaction with the robot. It was found that an interaction breakdown occurred in more than 50% of the trials. To a high degree, the failures were caused by the robot not recognizing the child’s face or her or his speech. A closer analysis of these breakdowns revealed that they could be repaired if the robot or the child repeated the question; however, the caregivers also helped out in some of the trials. These results are promising, suggesting that younger children can handle some dialogical difficulties on their own. However, one has to keep in mind that the mere presence of the caregiver could have influenced the children and given them the confidence to find a solution to the occurring problems.

Fig. 1 A young child approaching a robot and aligning to the positive emotions of the caregiver

Apart from being a potentially helpful resource in an interaction between the child and the robot, caregivers can also fulfill the role of monitoring the interactive behaviors of the robot [59, 60]. In fact, from the field of Robot-Assisted Therapy (RAT), it is known that a joint triadic interaction between caregiver, child, and robot is preferred by parents, teachers, and therapists over a mere interaction between a robot and a child [52]. In this regard, Cao and colleagues recently lent further substance to the role of the caregiver in child–robot interactions by introducing a supervised autonomous robotic system that can be operated by non-experts [59]. The authors implemented a robotic system that allowed the supervisor to control the robot’s actions before they were performed to ensure that only therapeutically valuable actions were executed [59]. Such an approach could also be feasible in nonclinical contexts, as it would mitigate the technological bottlenecks that prevent contingent interactions and involve the caregiver in ensuring that the use of the robot is better adjusted to the pedagogical concepts and institutional practices of the kindergarten. In sum, we suggest that the presence of a caregiver or a familiar person is preferable to a child interacting alone or in the mere presence of experimenter(s) in current child–robot studies. In addition, supervision of the robot’s behavior by the professional staff of a kindergarten could be a way to evaluate social robots within the institutional context, as it might provide a workable alternative in which the educators ensure that the interaction takes into account the specific rules, norms, values, and practices that exist in institutions like kindergartens.
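
The following minimal sketch shows how such a supervised-autonomy gate could be structured in principle: the robot proposes actions, and a caregiver or educator approves or rejects each one before it is executed. This is our own simplified rendering of the idea, not the implementation used by Cao and colleagues [59]; the action names and the console interface are illustrative assumptions.

```python
# Minimal sketch of a human-in-the-loop action gate: the robot proposes
# actions, a supervisor releases or rejects each one before execution.
from queue import Queue

proposed_actions: Queue = Queue()
for action in ["greet child", "start vocabulary game", "give hint"]:
    proposed_actions.put(action)

def execute(action: str) -> None:
    print(f"Robot executes: {action}")

while not proposed_actions.empty():
    action = proposed_actions.get()
    # The supervisor sees the proposed action and decides; only
    # pedagogically appropriate actions are released for execution.
    decision = input(f"Approve '{action}'? [y/n] ").strip().lower()
    if decision == "y":
        execute(action)
    else:
        print(f"Rejected: {action}")
```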

6 Pedagogical Concepts

As mentioned, studies on child–robot interaction have generally investigated ways in which the interactions could be beneficial to children’s learning, such as language learning in interaction with a robot. However, as previously reviewed by Kanero et al. [61], the literature provides little evidence for advantages with respect to first or second language acquisition. In fact, the authors state that “no study has found robots to be more effective at teaching words than other digital devices or human teachers, except for the sign language study in which beginners benefited from the physical presence of a robot” [61, p. 3]. Beyond learning individual words, there is some evidence suggesting that social robots foster language production, especially for individuals with autism spectrum disorder (ASD) [61]. Overall, the authors conclude that the support for effective learning with robots is scarce. One reason is that the conducted studies are “often descriptive and exploratory, and do not follow the scientific standards in other disciplines” [61, p. 4]. The standards mentioned concern (1) the lack of a proper control group to evaluate whether the robot is more effective at teaching language than other technological systems, (2) samples of children too small to conduct statistical tests, and (3) almost no focus on long-term learning [61]. Facing this state of the art, an urgent question is whether an interaction with a robot can be justified, especially in kindergarten, a context in which activities are expected to bring about benefits for the children (as described in Sect. 3.1). Raising this ethical concern, we propose that the learning content be evaluated before actual interactions take place in kindergarten settings. Besides the possibility of monitoring or supervising the actions of the robot by professional kindergarten staff, this can be done by laboratory studies addressing the learning benefits with a robot in either one-on-one settings or a simulated kindergarten group. A simulation of a kindergarten group can be achieved by inviting 3–6 children to join a small group that meets regularly for several sessions. This laboratory solution allows the children to be accompanied by their caregivers, who can interrupt the child’s participation if they consider it necessary. It also allows researchers to control for possible learning biases that can emerge because of the particular atmosphere, learning experiences, or the personality of the educator in a kindergarten group. However, this solution also bears some disadvantages: such a study is more effortful to conduct because it requires organizing all participants and activities.
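
To make point (2) concrete, a simple a priori power analysis shows why samples of only a handful of children per condition are insufficient. The sketch below uses standard but illustrative assumptions (a medium effect of Cohen’s d = 0.5, two-sided α = 0.05, power = 0.8); the concrete numbers are ours, not Kanero et al.’s.

```python
# A priori power analysis for a two-group comparison (e.g., robot vs.
# control condition). The effect size is an illustrative assumption.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # assumed medium effect (Cohen's d)
    alpha=0.05,        # two-sided significance level
    power=0.8,         # desired statistical power
)
print(round(n_per_group))  # ~64 children per group
```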

Lastly, whereas social robots are already applied in kindergarten settings, there is little insight into the perspectives of educators and parents on their use. Research has primarily focused on the opinions and acceptance of preschool, primary, or high school teachers towards the use of social robots in the classroom [62,63,64], but there is a lack of knowledge about the expectations of educators and parents about what kinds of interactions should take place between a robot and a child and what the content of these interactions could be. One approach to addressing this lack of knowledge could be to design and shape the use of the robot in the institutional context together with the stakeholders involved. In the work of Conti et al. [11], for example, the learning content was evaluated in advance together with the educators to ensure that it was appropriate for the children. While evaluations have generally been based on options given by the developers, in the future, stakeholders could be assigned a more active role in designing the technology and its options. Systemic, long-term evaluations of these interactions with stakeholders could also provide macroperspective insights into the impact on the institutional environment. In our view, it is crucial to gather insights in this regard in order to render the research responsive to possible concerns of the educators and the parents before social robots can be applied in kindergartens long term.

7 Conclusions

People’s interactions with robots differ fundamentally from their interactions with most other technologies in terms of their social and emotional involvement [21]. When social robots are introduced into kindergarten, two key perspectives need to be considered: on the one hand, the microperspective that has been the focus so far, concerning, e.g., children’s learning outcomes, system usability, multimodal dialogue design, or the context of the vulnerable target group; on the other hand, inevitably, the macroperspective regarding institutional goals (e.g., ensuring an inclusive setting), the trust in the pedagogically motivated values and concepts of early childhood institutions, the involved key stakeholders (e.g., parents and educators), and existing expectations concerning the activities in the institutional context of kindergarten. Instead of focusing mainly on the direct effects of robots on their individual human interaction partners, we have proposed to broaden the view and to critically regard the effects social robots have from both the microperspective, i.e., the interaction itself and the immediate social context, and the macroperspective, which addresses the social aspects of an interaction within an institutional setting and the related key stakeholders. The schema elaborated in Fig. 2 thus seeks to illustrate our plea to widen the scope of the methodology of current cHRI studies by proposing a perspective that provides a binocular view of the social environment of institutional education in which child–robot interactions take place. Accordingly, designers of child–robot interactions should be aware of the ethical, legal, and social implications. cHRI researchers, in turn, should ensure that the use of robots in kindergarten is accompanied by an evaluation from a microperspective as well as from a macroperspective and should consider how these dimensions are connected.

Fig. 2 Framework on ethical implications from a micro- and macroperspective

Our considerations are based on current studies on cHRI that target the setting of a kindergarten. We described that an institution like the kindergarten fulfills an essential role in the educational landscape and is therefore committed to providing valuable, pedagogically motivated settings for children. Its mandate and parents’ expectation are to care for children’s well-being and growth and to enhance their education. In this regard, we have argued that robots enter not only our physical environment but also our social environment, and that a use of social robots that is not oriented towards the pedagogical concepts and social norms of a kindergarten could cause a loss of trust in multiple dimensions: a loss of trust between parents and educators and a loss of trust between children and their educators. Although robots are already being applied in kindergarten settings, there is hardly any knowledge about the extent to which the interaction between a child and a robot in kindergarten can have an impact in this sense.

In addition to pedagogical concepts, social norms are also related to the way interactions are structured in a particular context. Although group interactions are common in kindergarten, current robots are unable to act sufficiently in group contexts, which could lead to the exclusion of certain children. Therefore, warm-ups with the entire group could ensure inclusion and introduce the test situation. At present, there is no policy to guarantee a match between children’s usual routines and the activities taking place within an experimental interaction with a robot. Thus, children can neither predict the partner’s next actions nor rely on what they are used to in interactions with their educators. Along these lines, and with respect to the specificity of interaction with and among children, we also expressed our concern that current technology is not able to respond appropriately to children’s multimodal communicative behavior, which amplifies the need for further development of child-oriented technology [54].

Several legal implications also arise in relation to the current use of robots in kindergartens. We identified the issues of safety, liability, and privacy, all of which directly affect the educators, the children, and the parents. We call for taking the vulnerability of children and their individual differences into consideration: children in kindergarten represent a heterogeneous group and have diverse demands in an interaction, but so far there is no framework of ethical considerations that corresponds to the integrative approach of a kindergarten and transfers it to child–robot interaction in order to enable all children to participate. In addition, we have stressed the importance of a familiar caregiver, on whom a child is often dependent in novel or unfamiliar situations and who may also be helpful in overcoming breakdowns in child–robot interactions. Lastly, we raised ethical concerns about the use of unevaluated learning content in kindergarten because of its vague educational effectiveness, which could undermine the educational goals of the kindergarten. We argue that before commencing any evaluation study in which children interact with a robot in a kindergarten, one should consider a workable alternative, such as conducting the study within a laboratory setting simulating the context of a kindergarten and evaluating the design of the interaction and the related consequences from a micro- and a macroperspective. To put our point in other words, it is not a question of calling all the research in this field back to the laboratory, but of ensuring in advance that the use of a robot as an interaction partner within the institutional environment is geared to the specific context with its institutional social norms, values, and concepts. We see this as an important challenge for the community of child–robot interaction: to develop new approaches, such as those proposed here, of evaluating the interaction in advance in the laboratory, involving familiar caregivers or educators, who might also supervise the interaction of the robot, designing the use of the robot together with the involved stakeholders, or establishing new forms of warm-up activities that familiarize the children with the technology and reduce false expectations.

Whereas our ethical concerns here focused on research, we also see an important challenge for new developments in pedagogical concepts for kindergartens that might establish the exploration of technology as a scientific activity that becomes a stable part of other daily activities in the context of kindergarten. Such developments, however, need to open up to critical technological thinking and conceive of scientific activities as moments not only of discovery and exploration but also of reflection and product experience. Then, within such a slot, a child–robot interaction would be an activity in which the child’s role (e.g., to evaluate the interaction) is clearly marked and involves a critical examination and reflection of an experience with a technological device. These roles, however, need to be established first in order to benefit from them.

Taken together, our article expands perspectives on the ethics of children’s interactions with robots and argues for a critical reflection on the use of robots in kindergarten in a socially, legally, and ethically responsible way. It explains the necessity for more research on the ethics of child–robot interaction and on the ethics of development processes when designing or evaluating such technology. With the considerations presented in this paper, we pursue the aim of stimulating further discussion and further technological developments.