Introduction

As a consequence of recent developments in Artificial Intelligence (AI), Machine Learning (ML), and advances in sensor and micro technologies, robots have become part of numerous areas of life and work. In addition to their primary use in industry and the military, they today serve as assistance systems to help with everyday tasks,Footnote 1 as companions in geriatric care,Footnote 2 or simply as technical substitutes for pets.Footnote 3 While robots were formerly used mainly in industry, where they performed risky and repetitive operations and thus functioned as useful tools, they are now – according to the rhetoric of technology developers and media reporting – increasingly becoming intimate interaction partners. So-called social robots are central to this context, as they seem to possess characteristics that were previously attributed only to humans: due to their appearance and functionality, they appear intelligent, emotional, and autonomous.Footnote 4 This gives rise to narratives that assert a similarity or even equality between humans and machines (cf. here, for instance, transhumanism [4, 5]) and has considerable consequences for the interaction between humans and machines: people no longer use social robots exclusively as artifacts and instrumental tools, but rather encounter them as a social counterpart.Footnote 5 Crucial to this is their anthropomorphic design and their tendency to imitate human modes of interaction and to simulate socio-emotional behavior. The phenomenon of imitation discussed in this article thus refers to the complex of anthropomorphic technology design, which is crucial especially in the field of social robotics (cf. here, among others, [8, 9]).

Against this background, the article examines how social robots perform intelligent, emotional, and autonomous behavior and how these features differ from human forms of emotionality, intelligence, and autonomy, and thus from the conditions we usually assume necessary for reciprocal human interaction. The paper thus takes as its starting point the assumption of a difference between human behavior and technical or robotic modes of operation – an assumption that seems intuitively plausible, but will also be justified on a conceptual level in the present article. The question of how human and robotic modes of behavior and operation differ is motivated by the following reasons: a) Even if the difference between human and robotic forms of interaction seems intuitively plausible, it needs to be conceptualized in order to understand, on the one hand, what we are dealing with in the case of social robots and, on the other hand, to better understand our relationship to robots. b) The question posed here helps refute hasty analogies and narratives that attribute capabilities such as reasoning, emotionality, and sociality to robotic systems – as can be observed, for example, in media coverage, in the advertising of technological products, but also in technophilic discourses such as transhumanism [4, 5]. Here, robots are ascribed abilities that they do not (yet) possess; at the same time, a narrative of the equality of humans and technology is constructed, which must be critically countered on a conceptual level. c) Forms of interaction can be identified in concrete human–robot interaction (HRI) in which the difference between humans and technology seems to be disappearing. Robots are increasingly taking on the role of equivalent or human-like interaction partners, making the question of how humans and robots differ all the more urgent.

The article thus starts from an anthropological-ethical perspective: on the one hand, conclusions can be drawn about the human relation to self and world from the interaction with robotic systems. On the other hand, pursuing the question investigated here makes it easier to assess whether the use of robots in social contexts – such as the care sector, school education, or the therapy of people with impairments – is advisable. Only marginal reference is made to empirical studies, since the topic of human–robot interaction has already been extensively researched on an empirical level and since the article is intended to provide a conceptual contribution to this discussion. With this in mind, we start by analyzing the technical operations by means of which social robots imitate human-like behavior. Subsequently, we describe how emotionality, intelligence, and autonomy are usually conceptualized in humans. Thus, we combine an analysis of the technical processes realized in social robotics with a discussion of the philosophical and psychological concepts important for discussing human interaction. Emotionality, intelligence, and autonomy are not treated here as the sole or sufficient features of human interaction, but as necessary ones, which play a central role in the context of HRI. Imitation, on the other hand, refers to the complex of anthropomorphic technology design with its imperceptible human–machine interfaces, which are meant to increase the acceptance of technical systems and to make the use of technologies as unobtrusive and intuitive as possible. Our aim is thus to make a conceptual contribution to the intuition that the technical imitation of emotional, intelligent, and autonomous behavior differs qualitatively from human forms of emotionality, intelligence, and autonomy. We therefore conclude that caution is advised, as anthropomorphizing robots can lead to false assumptions of reciprocity.Footnote 6 With the notion of reciprocity, we refer to sociological and psychological theories in which reciprocity describes a necessary condition of social relationships. Reciprocity can be explained epistemologically, for instance if, as in Mead [13], the "reciprocity of perspective" is regarded as essential for mutual understanding. On the other hand, reciprocity can be understood as an essential condition of moral relations of recognition [14]. Reciprocity here refers to the fact that interaction partners recognize each other as persons who depend on mutual recognition. The meaning of reciprocity may thus vary depending on the underlying theory, but in most cases it refers to a necessary condition of social interactions. However, before we turn more specifically to the complex of imitation, reciprocity, and the human conditions for interaction, we first take a look at the field of healthcare, since it is one of the main areas of application in which the use of social robots raises controversial questions about the roles of and the relationship between humans and machines.

Social Robots in Healthcare

In contrast to their industrial predecessors, social robots are designed for applications beyond industry and are characterized by more human-like features and better interactive skills. They are designed to read and interpret human emotions, to display situational awareness, and to express signs of emotional and cognitive involvement. This is realized by a behavior-based approach that has characterized robotics since the 1980s: robots were no longer considered purely useful tools, but were rather designed as flexible, dynamic, intelligent, and adaptive systems, which were intended to interact with their environment and to adapt to their users’ needs. Against this backdrop, social robots were created to develop cognitive, social, and emotional skills, which would enable them to display adaptive behavior and respond to the needs of their users (see, among others, [1, 2]). The goal of this development is thus to foster increasingly intimate, intuitive, and frictionless human–machine interactions. The use of robotic systems should be as easy and unobtrusive as possible so that even older or ill people are able to use robots for their purposes and in accordance with their needs.

In healthcare, social robots are mostly used for therapeutic purposes (both physical therapy and psychotherapy [15]), as well as in care facilities, for instance in geriatric care [16]. Social care robots are designed both to assist caregivers in everyday tasks such as washing and transferring patients and to entertain patients and encourage their social interaction. For example, some social care robots, such as the humanoid robot “Myon,” are programmed to motivate patients to sing and dance, to entertain them by playing music, or to stimulate their cognitive abilities through memory games.Footnote 7 Other models, such as the robotic seal “Paro,” perform the function of an artificial and sterile pet that, through touch and acoustic responses, is intended to relieve loneliness and provide missing social stimuli.Footnote 8 Social robots in healthcare are capable of processing information (“thinking”) as well as of adapting their behavior (“learning”). They possess mimetic, gestural, and linguistic abilities, by means of which they are supposed to establish and, at best, maintain a relationship with their human counterpart, thus creating the impression of being emotional, social, intelligent, and autonomous beings ([16], p. 202).

A further distinction can be made between humanoid and non-humanoid social robots. While humanoid social robots have human-like features, non-humanoid social robots mostly appear in the form of animals, and the interaction with their users is confined to non-verbal communication (see, for example, the robot seal “Paro,” the robot dog “AIBO,” and the nursing bear “Robear”). With non-humanoid social robots, interaction is often limited to the tactile dimension and encompasses mostly touch, as well as physical and auditory responses from the robot. Humanoid social robots, on the other hand, not only have a more human-like appearance, they are also capable of communicating verbally by using, among other techniques, natural language processing (NLP) and speech dialogue modules [18]. Hence, the technical requirements for humanoid social care robots are high: in addition to the usual requirements for robots such as a stable stance, fluid motion, and finely tuned motor skills, they must first of all exhibit authentic, empathic, and consistent behavior. Secondly, they need to be able to remember previous interactions ([19], p. 68), and, thirdly, be capable of adapting to changing circumstances, as well as to the needs of their human counterparts ([19], p. 65). Thus, in addition to the generally required actuators, sensors, and motors, humanoid social care robots operate on the basis of image processing algorithms, NLP, methods to model and express emotions, as well as ML. In order to provide a better understanding of the imitation of human characteristics in social robots, we will in the following analyze a robot model that has been available on the market for several years and has already been the subject of extensive empirical research, and that can thus be used to illustrate the field of humanoid social robots: the robotic system “Pepper.”
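Before turning to Pepper, the composition of the modules just listed can be made more concrete with a minimal, purely illustrative sketch in Python. All class and method names are invented for this article and do not correspond to any real robot middleware; the fragment only shows how sensing, image processing, NLP, an emotion model, and actuation might be chained into a perception–decision–action loop.

```python
# Purely illustrative: how the module types named above (sensors, image
# processing, NLP, emotion modeling, actuators) might be composed into a
# perception-decision-action loop. All names are hypothetical.

class Camera:                      # sensor
    def frame(self):
        return None                # stub: would return an image

class Microphone:                  # sensor
    def audio(self):
        return None                # stub: would return an audio buffer

class FaceAnalyzer:                # image processing: facial-expression classification
    def emotion(self, frame):
        return "neutral"           # stub: e.g. "happy", "sad", "neutral"

class SpeechModule:                # NLP: speech recognition and synthesis
    def transcribe(self, audio):
        return ""
    def say(self, text):
        print(f"[robot says] {text}")

class BehaviorPolicy:              # emotion model and (ML-based) behavior selection
    def select(self, user_emotion, utterance):
        return "How are you feeling today?"

class Actuators:                   # motors, gestures, LEDs
    def gesture(self, name):
        print(f"[robot gesture] {name}")

def control_step(cam, mic, face, speech, policy, act):
    """One cycle of the loop: sense, interpret, decide, act."""
    user_emotion = face.emotion(cam.frame())
    utterance = speech.transcribe(mic.audio())
    speech.say(policy.select(user_emotion, utterance))
    act.gesture("nod")

control_step(Camera(), Microphone(), FaceAnalyzer(), SpeechModule(),
             BehaviorPolicy(), Actuators())
```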

Pepper was launched in 2014 and has been available on the market since 2015. It is used in care facilities and hospitals, but is also deployed in sales, hotels, banks, and education, among other settings. In healthcare, Pepper works as a health assistant, communicator, data generator, and receptionist.Footnote 9 The robot is equipped with a variety of sensors and motors and can be connected to the Internet via WiFi or Ethernet. Pepper has a human-like appearance and can communicate with its human counterpart not only through touch but also through natural language, as well as through a tablet installed in its chest.

Pepper appears sentient and emotional through several different operations. In addition to body movements and sound, Pepper transmits information about its “emotional” state via natural language communication, a touch screen in its chest, and LEDs behind its eyes. With these LEDs, Pepper is able to indicate whether it is listening (blue), processing information (green), or not “being there” at all (white),Footnote 10 thereby displaying situational awareness and imitating attentiveness and emotional involvement. Above all, however, Pepper displays emotional and social behavior through its advanced abilities to remember and communicate. Thus, the robot can remember previous interactions and linguistic content. The impression of emotionality and social behavior is further strengthened by Pepper's ability to interpret or decode the facial expressions, gestures, and tone of voice of its human counterpart. Pepper uses emotion detection and facial recognition and responds to the detected information with corresponding gestures, sentences, and movements.
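How such detected states could be coupled to displayed behavior can be illustrated with a deliberately simplified sketch. The LED colors follow the description above; the response table and the function react are hypothetical and do not represent Pepper's actual programming interface.

```python
# Hedged illustration, not Pepper's actual API: a detected emotion and an
# internal processing state are mapped to a scripted verbal/gestural
# response and an LED color, as described in the text above.

LED_STATES = {
    "listening": "blue",
    "processing": "green",
    "idle": "white",
}

RESPONSES = {
    "happy":   {"sentence": "You look cheerful today!",       "gesture": "open_arms"},
    "sad":     {"sentence": "Would you like to hear a song?", "gesture": "tilt_head"},
    "neutral": {"sentence": "How are you feeling today?",     "gesture": "nod"},
}

def react(detected_emotion: str, state: str) -> dict:
    """Return the pre-programmed reaction for a detected emotion and state."""
    reaction = RESPONSES.get(detected_emotion, RESPONSES["neutral"])
    return {"led": LED_STATES[state], **reaction}

print(react("sad", "processing"))
# {'led': 'green', 'sentence': 'Would you like to hear a song?', 'gesture': 'tilt_head'}
```

However schematic, the sketch makes the underlying structure visible: the displayed emotionality consists of lookups over pre-defined states and scripted responses.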

Moreover, Pepper appears as a rudimentarily intelligent and autonomous device that imitates an understanding of its human counterpart. While Pepper appears to be sentient due to a variety of sensor technologies, the impression of intelligence is conveyed by NLP and emotion detection, and manifests itself in the capability to remember previous interactions and to perform AI-based communication. Besides speech recognition, which enables Pepper to communicate via dialogue, it is able to focus its attention and has a certain degree of awareness of its situation and environment [20]. In addition, Pepper also appears to be autonomous to a certain degree. In contrast to humans, however, “autonomy” here refers mainly to the ability to move independently, to detect objects in the immediate surroundings, to sidestep them, and thus to avoid collisions. Autonomy is thus performed through body posture identification, self-configuration recognition, and self-body awareness, among other capabilities [21]. In other words, robots like Pepper have modules that allow them to confront their human users as apparently independent beings. Finally, Pepper conveys the impression of being intelligent and autonomous through its ability to learn. By means of machine learning methods, Pepper is capable of adapting to its user’s preferences and needs and is able to imitate the user’s behavior. Every interactive setting thus provides the robot with new data, based on which it can improve its interactive, social, and linguistic skills.
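The adaptation described here can likewise be illustrated with a toy example. The following sketch stands in for the machine learning methods mentioned above and makes no claim about the algorithms actually implemented in Pepper; it merely shows how repeated interactions supply data from which a robot can derive and reinforce user preferences.

```python
# Toy stand-in for the adaptation described above; real systems use far more
# elaborate ML methods. The robot logs each interaction together with the
# (estimated) user reaction and gradually favors activities that were
# received positively. All names are invented for this illustration.

from collections import defaultdict
import random

class PreferenceLearner:
    def __init__(self, activities):
        self.activities = activities
        self.scores = defaultdict(float)   # accumulated feedback per activity

    def choose(self, explore=0.2):
        # Mostly pick the best-scoring activity, occasionally explore others.
        if random.random() < explore or not self.scores:
            return random.choice(self.activities)
        return max(self.activities, key=lambda a: self.scores[a])

    def update(self, activity, reaction):
        # reaction: +1 if the detected response was positive, -1 otherwise.
        self.scores[activity] += reaction

learner = PreferenceLearner(["memory_game", "play_music", "small_talk"])
for reaction in [+1, -1, +1, +1]:          # simulated interaction sessions
    activity = learner.choose()
    learner.update(activity, reaction)
print(dict(learner.scores))                # preferences accumulated so far
```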

Differences between Humans and Robots regarding Emotionality, Autonomy, and Intelligence

In the following sections, we analyze how autonomy, emotionality, and intelligence are used differently in the context of humans and robots by elaborating how these qualities may be conceptualized in humans. In doing so, we do not understand autonomy, emotionality, and intelligence as exhaustive properties of human interaction, but rather as necessary features of it that are closely related to epistemological and ethical concepts of reciprocity. Moreover, they are central in the context of anthropomorphic technology design, insofar as they are crucial to the “human likeness” of social robots. Further, the selection of these three features allows us to pursue the question of how human and robotic modes of interaction differ on the basis of three different criteria and thus offers impulses for further analysis. With this in mind, we refer to selected philosophical and psychological debates in which emotions, intelligence, and autonomy are analyzed as prominent human characteristics. Subsequently, we examine to what extent these concepts can be applied to social robots.

Emotionality

For a long time, emotions were understood in opposition to rationality and reason. In current debates in the “philosophy of emotions,” however, the relationship between rationality and emotions is being reconceptualized. Two lines of thinking can be distinguished here: cognitivist and phenomenological theories of emotion. Cognitivist theories define emotions as cognitions or mental states such as thoughts, beliefs, or evaluations [22,23,24,25,26].Footnote 11 Hence, emotions are said to have a propositional structure and, on this basis, are asserted to be structurally analogous to rationality. Phenomenological theories of emotion, however, object to this. They criticize the reduction of emotions to mental states and emphasize the bodily dimension of emotions ([29], p. 21, 28), as well as their experiential quality. Hence, according to phenomenological theory, it is primarily (bodily) experience that distinguishes emotions from other mental phenomena.Footnote 12

Thus, the realm of emotions is not clearly defined [31]. Further, in the “philosophy of emotions,” a distinction is often made between emotions and feelings [32, 33]. On this basis, at least five characteristics of emotions can be identified. Emotions are – in contrast to feelings or mere bodily and sensory sensations – intentional (1), have a representational and evaluative character (2) and a motivational function (3), and possess a bodily component (4) and an experiential quality (5). A decisive criterion for differentiating emotions from feelings is their intentionality: emotions differ from feelings or bodily sensations such as pain, pleasure, cold, or warmth first of all by their directedness to an object (1) [30]. This results in another property of emotions: their representational and evaluative character (2). The object to which emotions refer is, qua this reference, an "evaluated" object. Accordingly, emotions represent their intentional content and confer a value on the object to which they are directed. For example, the dog I am afraid of becomes a frightening animal (for me) through my emotion of fear [29, 32]. Furthermore, emotions have a motivating function – they motivate us to act (3) – and a bodily component (4): they manifest themselves in facial expressions and gestures and are often accompanied by physical reactions such as sweating or blushing. And lastly, emotions have an experiential quality (5). Thus, emotions not only evoke physical reactions, but are also experienced in terms of a lived body or body-subject, i.e., for example, as feelings of constriction, tension, widening, or opening ([34], p. 22, 222).

Against this backdrop, the question arises to what extent social robots can perform or imitate emotional behavior. For example, understanding emotions as physiological or biochemical processes and conceptualizing them within the framework of stimulus–response schemes could lead to the assumption that machines can, in principle, acquire emotions. If, for instance, Pepper were in the future able to read and interpret the facial expressions, gestures, and moods of its human counterparts as stimuli and respond with corresponding emotional reactions, it could be assumed that Pepper has a form of emotionality comparable to that of humans. However, this assertion appears problematic if one considers its premises. An understanding of emotions that defines them as cognitive states or bodily sensations presupposes the existence of mental states, i.e., of consciousness. Whether social robots or AI-based technologies in general will one day be able to develop consciousness-like states is, however, doubtful and highly controversial [35]. Moreover, the aforementioned assumption becomes even more problematic if one considers the phenomenological understanding of emotions, which situates emotions within the relationship of self and world and thus links emotions to the concept of corporeality. For even if social robots already show a certain degree of situational awareness, and even though research is currently being conducted to implement self-awareness or bodily awareness in robots [21, 36], it still seems that there is a long way to go before felt corporeality and an existentially experienced self-relationship can be achieved, if at all.Footnote 13

Autonomy

Autonomy, in turn, is one of the central concepts of ethics and political theory and covers a wide range of meanings. It can refer to self-rule, self-determination, self-knowledge, absence of external causation, and free will ([37], p. 267). At the same time, autonomy can be understood as a capacity, a competence, a constitution, an ideal, or a right [38]. Etymologically, autonomy refers to the capacity for self-legislation (autós: self, nómos: law). In this sense, the term also found its way into Kant's philosophy, where autonomy refers to the human capacity for self-legislation, from which the demand is derived to make actions dependent on the generalizability of their maxims. Moreover, it is often emphasized that autonomy means freedom from external coercion or domination by others. A person who is autonomous in accordance with this understanding is able to make choices and decisions based on their own preferences without any major interference [37]. A third approach defines autonomy as personal autonomy and thus as the ability to act and live according to one's own reasons [39]. Here, the focus is on the ability to critically evaluate one's own beliefs and identify with higher-order volitions. In this context, authenticity or the coherence of a set of beliefs, long-term values, or life plans is considered a condition of autonomy [39,40,41,42]. Another approach conceives autonomy as relational autonomy. Here, the traditional concepts of autonomy, which are based on the notions of self and identity, are criticized; instead, the importance of relationships and interdependencies for the constitution of the individual self is stressed [43,44,45]. The autonomous self is thus not one that precedes interaction and relations, but rather one that is always influenced and shaped by other individuals and entities, be they human or non-human. Relational theories of autonomy thus criticize the idea of an “atomistic” and self-sufficient self, and argue that identity is relational, i.e., pluralistic, and exposed to others [46].

What does this mean for social robots? In what ways can we speak of social robots as autonomous? Although the term “autonomous devices” is often used in connection with new technologies (cf. the notion of “autonomous” vehicles, weapons systems, or robots), this terminology is revealed to be misleading when we take a closer look at its underlying concept. Nevertheless, the concept of autonomy harbors a certain ambivalence regarding AI-based technologies, insofar as these technologies – especially in the field of robotics – seem to exhibit a certain degree of self-activity. If, for example, Pepper performs tasks independently and optimizes them through repeated execution, it appears to act free of external coercion and thus “autonomously.” However, insofar as Pepper's scope of action depends on its programming, Pepper is ultimately still subject to external – i.e., human – legislation.

If, however, one considers autonomy as the ability to act and live according to one's own reasons, it is conceivable that social robots could form a hierarchy of values and follow higher-order values (such as the value of politeness) even while making individual decisions. Or, at least, social robots could aim to achieve a certain degree of coherence between individual decisions and pre-programmed higher-level values. However, this would still not be comparable to the forms of reflection, evaluation, and identification that human individuals undertake, in terms of their personal autonomy, with respect to their beliefs and value judgements. Such reflection, in fact, presupposes a lively interest in one's own desires, value attitudes, and life plans.

The concept of relational autonomy would ultimately be most applicable to social robots, as it emphasizes the role of interactions in the formation of the self and identity and, thus, for autonomy. The transferability of the concept of autonomy to social robots therefore seems plausible, but still only to a limited extent, since autonomy is inextricably related to the concept of vulnerability [47]. Vulnerability, however, refers to a moment of existential exposure and social interdependency that is, at least to a certain degree, inherent only in corporeal beings.Footnote 14 Even though the concepts of embodiment and situatedness determine the construction of robotic systems, social robots are still technically constructed bodies that lack the experience of contingency and interdependency. Unlike the living human body, social robots are replaceable, exchangeable, and not subject to finitude [48].

Intelligence

Intelligence is one of the main characteristics of human existence, and it plays a decisive role in situations of interpersonal interaction. In the history of philosophy, intelligence has been elevated to the defining characteristic of human beings, distinguishing them from other entities such as animals, plants, and machines (cf., among others, the definition of “human beings” as animal rationale or zoon logon echon). However, this line of thinking was challenged from at least two directions in the twentieth century. Firstly, ethology and comparative behavioral research showed that animals also demonstrate intelligent behavior. Secondly, research on artificial intelligence from the 1950s onwards considered the possibility that “machines can also think” [49]. But what is intelligence, and to what extent are human and machine intelligence similar or different?

If one looks for a definition of “intelligence,” one encounters a multitude of different approaches. Intelligence is, for instance, defined as the ability to adapt to new situations, as the competence to solve problems efficiently, as the capacity to learn from experience, or as symbol manipulation.Footnote 15 Some theories assume a general intelligence factor that underlies all intellectual performances and represents the cognitive performance of an individual (two-factor theory) (cf., for example, [51]). Against this backdrop, Raymond B. Cattell distinguished between two different forms of intelligence: fluid intelligence and crystallized (general) intelligence. While fluid intelligence includes those abilities that are innate rather than socially or culturally conditioned (such as general comprehension), crystallized intelligence refers to experience-based abilities and thus encompasses explicit knowledge (such as factual knowledge) as well as implicit knowledge (such as body techniques).

In general, factorial, multidimensional, and process-oriented theories of intelligence can be distinguished. Multidimensional intelligence models, as formulated by Joy P. Guilford or within the framework of the Berlin Intelligence Structure Model, understand intelligence as a collection of different abilities that are differentiated into complex systems.Footnote 16 In doing so, they provide a counter-model to the hierarchical approach of older factorial intelligence models. Robert Sternberg, in turn, abandons the assumption of a general intelligence and shifts the focus from a hierarchical concept of intelligence to the study of information processing [53]. In contrast to the hierarchical model of factorial theories of intelligence, Sternberg distinguishes between three components of intelligence (analytical, creative, and practical intelligence) and at the same time adds two further aspects to the concept of intelligence: “the ability to adapt to a continuously changing environment” and “the motivation to acquire knowledge” [54]. Against this backdrop, Howard Gardner, in his “Theory of Multiple Intelligences,” identifies seven different “intelligences” which are interrelated but still relatively independent of each other [55]: verbal, logical-mathematical, spatial, musical, bodily kinesthetic, intrapersonal (self-reflection), and interpersonal (social skills) intelligences.Footnote 17

How can this be transferred to the field of social robotics? In view of the various models of intelligence that have been elaborated since the beginning of the twentieth century, a transfer of the concept of intelligence to technical systems seems possible. In particular, information processing theories, which attribute intelligent behavior to information processing, seem to suggest a proximity between human and machine intelligence (see, e.g., Sternberg). Similarly, the concept of Allen Newell and Herbert A. Simon, which explains intelligence as the manipulation of symbols or symbol processing, seems applicable to both human and machine entities. One reason for this is the functionalist perspective, according to which intelligence is based on information processing, processing speed, and numerical abilities.

Despite the conceptual proximity that the field of machine intelligence shares with its natural model, it is striking that “natural” forms of intelligence encompass a much broader scope than their technical simulations. While technical intelligence is primarily specialized in a specific area, human intelligence expresses itself in more diverse ways: for example, as social, intrapersonal, bodily kinesthetic, or even musical intelligence [55]. Human intelligence therefore does not consist exclusively of logical reasoning and information processing but can also be expressed through visual, emotional, or social abilities. Moreover, human intelligence is more error-prone and thus in some ways more irrational than its technical counterpart; at the same time, it is by comparison versatile, open, and unspecialized [57]. This may, however, change in the field of social robotics, as social robots could in principle develop certain forms of interpersonal and embodied intelligence through their interactive skills and embodiment. However, intrapersonal intelligence, i.e., the ability of self-reflection, still seems a long way off for social robots. In addition, human intelligence is always bound to a biological body, which is inherently temporal, and has, moreover, a close proximity to consciousness and thought. Human intelligence and technical intelligence thus ultimately refer to very different phenomena, and the question of technical intelligence points to that of artificial consciousness. Just as artificial consciousness, if it were technically realizable, would in all likelihood be fundamentally different from human states of consciousness, technical or artificial forms of intelligence also describe an entirely new phenomenon.

Implications for Human–Robot Interaction

As we have shown, there are major differences between human emotionality, autonomy, and intelligence and the corresponding characteristics that social robots exhibit. This has far-reaching implications for human interactions with social robots. On the one hand, the human-like yet different characteristics of social robots lead to increased acceptance and a higher degree of usability of the technical systems. On the other hand, the technical imitation of human characteristics and modes of interaction can have problematic implications for assumptions about interpersonal relationships.

Especially in the healthcare sector discussed earlier, social robots that are autonomous, problem-solving, and responsive to the emotional needs of humans can lead to improvements and innovations. In the treatment of dementia patients, for instance, social robots can help to relieve nursing staff by taking over routine tasks such as transferring, carrying, and washing patients, administering medication, or taking blood samples. At the same time, they can bring about new forms of therapy and allow for new methods of monitoring and communication [58, 59]. In addition, there have recently been more and more studies investigating the use of social robots in autism therapy. Here, it has been observed that children with autism seem to engage more easily with a robotic system as a social counterpart [60]. While interaction with human subjects is often stressful and overwhelming for the children, they face a lower level of social stimuli and demands with social robots. In this way, one takes advantage of the fact that social robots have so far only limited communication and interaction capabilities, which is why interaction with them is less complex and therefore easier to process and understand. Social robots thus function as a practice tool that children with autism can use to gradually approach the field of complex social interactions.

This is made possible by their human-like, yet different design and mode of interaction. Thus, several studies have shown that children in robot-assisted autism therapy were able to improve their communication and interaction skills as well as their understanding of emotional responses [61, 62]. Hence, new interaction scenarios can be observed in the use of social robots in care and therapy that are characterized by increasing closeness, intimacy, and trust between humans and technology. Further, social robots can become "devices of trust," especially for patients with cognitive or emotional impairments, compensating for the lack of social interactions and providing patients with missing or appropriate social stimuli. On the other hand, however, imitating human behavior also has problematic effects. The problem does not lie so much in the different manifestations of the characteristics discussed here, but rather in the simulation itself. Because robots merely imitate autonomous, emotional, and intelligent behavior – i.e., they have no intrinsic social motivation – people can be deceived about the fact that they are interacting with a technical system [63]. In other words, the use of social robots in care involves the risk that users do not realize that the interaction with the robotic systems is algorithmically pre-programmed, i.e., that robots do not respond to their counterparts in a genuinely responsive manner [64, 65], but merely follow preset programs. Although this argument has been criticized in some contexts (cf., for example, [66]), it seems to remain relevant in the field of care and therapy, where social robots encounter people with physical, emotional, or social impairments. For if robots do not behave socially of their own accord, but only on the basis of their programming, they ultimately perceive only those social signals that they can measure, calculate, and process qua their programming, as a result of which sympathy and attention, as well as spontaneity and flexibility, in care and therapy are likely to decrease.

In addition to these implications, which are problematic especially for the care sector, the imitation of human behavior and the paradigm of human likeness in technology design could also have problematic effects on a more fundamental and general level. They could challenge the intuitively shared basis of interpersonal interaction and reinforce tendencies already observed in other technological contexts, such as the phenomenon of “automation bias.” Automation bias – the tendency of people to trust automatically generated decisions more than human decision-making processes – is particularly relevant in the field of social robotics: first, because social robots appear as quasi-independent, embodied, and “understanding” entities through their ability to imitate and mimic human behavior; and second, because social robots are mostly used by vulnerable groups whose critical judgment may be limited. In this context, studies have shown that, especially in the field of social robotics, people are more likely to be influenced in their individual judgments by robots than by humans ([67], p. 41). The increased trust in automated decision-making processes observed here could thus lead to a loss of trust in human decision-making and to new forms of manipulation.

Moreover, the imitation of human characteristics could decisively change the shared basis of human interaction. Thus, it is crucial for human–human relationships to assume that the social counterpart is capable of being emotional, autonomous, and intelligent and that this applies to all interaction partners, i.e., that interaction is characterized by reciprocity. However, we cannot assume that the emotions people demonstrate in situations of interpersonal interaction are necessarily real or similar to our own emotions. Nevertheless, interpersonal interaction is based on the assumption of a fundamental similarity regarding the disposition to be emotional, intelligent, and autonomous. If this basic assumption of fundamental similarity and reciprocity is now transferred to artifacts that merely mimic human properties, the common basis of human interaction could change decisively. For example, dealing with social robots that are always obedient and never display interests, wishes, or needs of their own could reduce the willingness to engage in complex and conflictual interactions [68, 69]. To the extent that social robots exhibit socio-emotional behavior but always adapt to the needs of their users and never appear with divergent interests and demands, interacting with them – assuming an all-encompassing use of such systems – might lead to a decrease in the willingness to deal with unruly and non-adaptive behavior, i.e., to acknowledge the conflictual and demanding dimension of social relationships.

Conclusion

Humans have always interacted with non-human entities and have transferred certain human characteristics, such as emotions or motivations, to them. This ranges from animism to assumptions about divine human likeness, from human-like animals in fairy tales and fables to childhood games with dolls and cuddly toys. What is new in the context of social robots, however, is that the tendency to anthropomorphize non-human entities is brought to a technological level and is thus deliberately reinforced. Anthropomorphism migrates into the devices themselves and is reflected in their design and modes of operation. However, as has been shown, the resulting forms of simulated emotionality, intelligence, and autonomy differ greatly from their human model. This may lead to and reinforce false and confusing assumptions regarding reciprocity and the basic conditions of social interaction. Therefore, further studies are needed that critically examine and evaluate the potential risks of false assumptions about reciprocity and of social robots as interaction partners. This is particularly important for the use of social robots in healthcare, i.e., among vulnerable groups such as children or the elderly. Educating individuals and society about social robots is thus a task that will become increasingly important as the development and use of social robots becomes more widespread.