1 Introduction

The 2017 BBC Three documentary “Can Robots Love Us?” deals with societal applications and consequences of recent progress in robotic technologies. One of the fields of implementation discussed is eldercare and the campaign against loneliness among older people in the UK. In a short sequence, 83-year-old widower Bill is asked whether a robot could help relieve his permanent loneliness. Interestingly, Bill’s first question is not concerned with costs or functionalities but with whether such a robot would be “male or female” [1].

This example appears significant in several respects. First of all, it reminds us that the relevance of robotic technologies in the ethically sensitive field of eldercare will increase in the coming decades due to population aging, the erosion of traditional familial care structures, and the persistent shortage of professional caregivers. For example, care robots are developed to take over or facilitate arduous care tasks such as washing, lifting, or repositioning patients. Social robots, in turn, are supposed to function as companions in order to relieve loneliness and facilitate communication, social interaction, or entertainment.

Furthermore, the short sequence also illustrates how common social and cultural norms and attributions can influence human-robot interaction in eldercare. Gender is a particularly prominent case in point. An increasing number of socio-psychological studies indicate that technology development and human-machine interaction are influenced by gender stereotypes (for an overview see [2]). Prominent examples are virtual voice-operated assistants such as Amazon’s Alexa, Microsoft’s Cortana, or Apple’s Siri, which have typically been given female names and voices by default. With robotic systems in eldercare, this trend has continued. Thus, care robots like RIKEN’s ROBEAR or Fraunhofer’s Care-O-bot 3 often show masculine attributes, such as strong ‘muscular’ arms or sturdy torsos. By contrast, many social robots, like Catalia Health’s Mabu, use a soft voice that may evoke a comforting female caregiver. As gender roles among older people become more diverse and the diversity of their needs and preferences receives more recognition, such stereotypical approaches could pose significant challenges to the development and implementation of robotics in modern pluralist societies. For example, some point out that stereotypical robots may not meet the needs of all older users and may even reinforce prejudice and discrimination “by systematically influencing perceptions, interpretations, and judgments” [3 p. 7]. In this vein, gender stereotyping of robotic systems in eldercare can have far-reaching and sometimes rather problematic consequences for individuals as well as for society at large. It raises moral concerns regarding individual wellbeing and social justice that need systematic ethical consideration in order to establish a socially accepted and morally acceptable utilization of robots in eldercare [4].

In this paper, we provide an exploratory ethical analysis of the moral issues associated with gender stereotyping in robotics for eldercare. We first give an overview of the state of research regarding gender stereotypes in human-robot interaction and also take into account pertinent studies on the special field of care robotics for older people. Starting from a principlist approach comprising the ethical principles of autonomy, non-maleficence, beneficence, and justice, we then map possible moral problems and conflicts, such as the tension between a potentially beneficial impact of stereotyping on user compliance and wellbeing on the one hand, and its implications for modern pluralistic values such as diversity and mutual recognition on the other. We consider possible solutions for the development and implementation of morally acceptable robots for eldercare in modern pluralistic societies, focusing on three different perspectives: explanation, neutralization, and queering of care robots. Finally, we discuss the potentials and problems of these three strategies and conclude that especially the queering of robotics and the idea of a gender-fluid robot represents a new, innovative outlook that deserves closer ethical, social, and technological examination and elaboration.

2 State of Research on Stereotyping in Robotic Care

Since robots are becoming more relevant as quasi-social agents and counterparts in human-machine interaction, there has been increasing public and academic attention to stereotypes in the context of robotics. Psychologists define stereotypes as “associations and beliefs about the characteristics and attributes of a group and its members that shape how people think about and respond to the group” [3 p. 8]. The pervasiveness of these associations and beliefs is rooted in the widespread tendency to define others as members of social groups or in connection with social categories or characteristics [5, 6] in order to reduce uncertainty and provide orientation in social interaction.

When an individual is identified as a member of a specific group by virtue of a pertinent cue, attitudes and reactions are often based on (implicit) knowledge about typical members of this group. This knowledge can comprise traits, typical competences, and other qualities [3], and is influenced by one’s own cultural and social background. In this way, “[s]tereotypes reflect general expectations about members of particular social groups” [7 p. 276]. Such stereotypical attributions are often implicit and very stable.

While behavior that deviates from stereotypes is mostly neglected or ascribed to situational aspects, stereotype-consistent behavior reinforces the stereotypes [3]. In this sense, stereotypes seem to function as a “filter” [7 p. 282] of one’s perception. At the same time, the stereotyping expectations – often implicitly – influence other people’s behavior and thus become self-fulfilling prophecies [3]. In this way, stereotypes not only function as neutral prognoses of people’s behavior but also set normative expectations about how a good group member should behave [7].

The mechanisms and effects of stereotyping in human-machine interaction have been widely examined in recent years. Thus, it is well known that anthropomorphic views and perspectives pervade human attitudes towards and responses to technology [8]. Accordingly, socio-psychological research has shown that social categories from human interaction are also applied in human-robot interaction and work in a similar way in this context. For example, several studies indicate that the presumed social group membership of a robot is relevant for its evaluation and that users prefer to interact with robots that appear to belong to their own group [9,10,11].

When it comes to stereotyping, the category of gender plays a particularly prominent role: While other group memberships may change, gender remains an important category in most situations and is usually present at least at an implicit level [7]. This is also mirrored in recent socio-psychological research on stereotypes in human-robot interaction, which shows a special focus on gender as a social category (e.g. [2, 12, 13]). While there are also individual studies on aspects of class or race [14], a broad line of research concentrates on users’ perceptions of a robot’s gender, the relevant cues and markers, and the effects of such a gendered perception of robots (e.g. [2, 15]). The findings make clear that these markers can be categorized as morphological, vocal, and behavioral cues, as well as individual-related information, e.g., name and corresponding pronouns [16].
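
For illustration, this categorization can be rendered as a simple data structure, as one might when systematically varying cues in an experimental design. The following minimal Python sketch is purely hypothetical; all field names and default values are invented for illustration and are not taken from any of the cited studies or from an actual robot platform.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenderCueProfile:
    """Hypothetical encoding of the four cue categories described above."""
    # Morphological cues: body shape and hair [12, 18, 19, 20]
    waist_to_hip_ratio: float = 1.0
    shoulder_width_cm: float = 40.0
    hair_length: str = "none"          # e.g. "none", "short", "long"
    # Vocal cue: a pitch between typical adult male and female voices [15, 22, 23]
    voice_pitch_hz: float = 165.0
    # Behavioral cue: communication style set by the robot's programming [16]
    communication_style: str = "neutral"
    # Individual-related cues: name and corresponding pronouns [13, 16]
    name: Optional[str] = None
    pronouns: str = "it"

# A study design might then vary one cue category at a time, e.g.:
stereotypically_female = GenderCueProfile(
    waist_to_hip_ratio=0.8,
    hair_length="long",
    voice_pitch_hz=210.0,
    name="Mia",
    pronouns="she",
)
```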

As physical appearance is considered to be essential for defining an individual’s gender [17], morphological cues play an important role in human-robot interaction. Even minor morphological cues can already create a gendered perception of a robot. The robot’s body shape is one example of a morphological cue. Thus, several quantitative online studies with samples ranging from 107 to 150 participants show that waist-to-hip ratio and/or shoulder width can influence the perception of a robot as male or female [18,19,20]. Additionally, facial cues matter. In one of the first studies dealing with this issue, Eyssel and Hegel found in a quantitative design with 60 participants that a short-haired robot is perceived as more masculine than a long-haired one [12]. Interestingly, robots without deliberately chosen gender cues are often identified as male [21].

Furthermore, several studies point to the significance of the robot’s voice and vocal cues [15, 22]. Two aspects are crucial here. First, so-called gender-neutral voices are more often identified as male than as female or neutral in empirical tests [23]. Second, if the voice is clearly identified as male or female, test persons make gendered assumptions about the tasks and competences of the robot that correlate with stereotypical occupations and competences of men and women [16, 24, 25]. For example, Tay and colleagues demonstrate that a robot with a male voice is perceived as more suitable as a security robot than a robot with a female voice [25].

Such vocal cues are closely related to behavioral cues like communication style, which are usually based on the robot’s programming and also have effects on human-robot interaction. For example, Kraus and colleagues [16] examined stereotypes in verbal human-robot interaction. For this purpose, they manipulated not only the voice and name of the robot in a stereotypical way but also its wording and communication style. In the analysis of data from 38 participants who had interacted with a NAO robot, they detected a strong effect of the implicit gender that manifests itself, e.g., in the ascription of stereotypical personality traits in specific cases. As one example of an individual-related cue, a naming task was used by Ladwig and Ferstl in an online survey among 40 participants to examine users’ stereotyping of robots. With regard to the implicit attribution of gender to social robots, they found that names are useful “as a measure of implicit gendering of robots” [13].

Studies on the effects of such gender cues also show why a gendered perception of robots becomes relevant in a sensitive field like eldercare. Besides the ascription of stereotypically male or female traits [20], a robot’s perceived gender also influences the judgement of its competences [19]. Especially the assumed suitability of a robot for a specific task seems to be closely linked to the perception of its gender. For example, a number of studies show that a ‘male’ robot is evaluated as useful for stereotypically male tasks like repairing technical equipment or security activities, while a ‘female’ robot is deemed appropriate for stereotypically female tasks like household and care services [12, 19]. Furthermore, a qualitative study with a comparatively small sample of six participants indicates that older people evaluate nursing as a female task when deciding on a robotic appearance [26].

Based on such results, Kuchenbrandt and colleagues argue that social roles and attributions concerning gender-connotated tasks need to be considered in the design of robotic systems, and that especially ‘traditionally female’ professions such as nursing care should be examined in further research [27]. This becomes even more important as many robots tested or actually used in the care sector were not originally and specifically designed for care purposes but, for example, as industrial robots like Kuka’s LBR robotic arms, or as general companion robots like SoftBank’s Pepper. In general, the existing research on gender stereotypes in human-robot interaction and their influence and efficacy has rather far-reaching practical implications for technology development and implementation. There are even proposals to deliberately use such stereotypes in the development, design, and implementation of social and care robots in order to improve human-robot interaction. Thus, the implementation of gender cues in robots is discussed as a way of increasing user acceptance and achieving a better user experience [21]. For example, Wang and colleagues found that stereotypical accessories like aprons, despite having no specific technical function, can improve the acceptance of robots conducting stereotypically female tasks such as cleaning or clearing tables [28].

Against this backdrop, the broader social implications and consequences of the use of gender cues and stereotypes in technological development and implementation deserve closer examination. So far, such implications and consequences have primarily been highlighted from the perspective of feminist and queer theory, reminding us of the social dimension and “social shaping” of technology [29]. In this context, ‘queer theory’ can be understood as an umbrella term for theories that adopt a critical perspective on existing social categories, in particular gender, and their normalizing tendencies (see e.g. [30,31,32]). In this way, these theories aim to ‘queer’, that is, to challenge and deconstruct, the use of such categories. There is a wide range of corresponding perspectives on gender stereotyping in technological contexts. While some assign a liberating potential to properly designed and applied technology, others warn against the continuation of patriarchy by other means. One example is the role of the “cyborg”, a cybernetic organism that combines biological and technological elements, in the work of Donna Haraway [33]. On the one hand, it may be seen as a pathway towards liberation because it has no fixed biological sex and can thus blur the lines of gender and reveal its socially constructed character [31 p. 456]. On the other hand, the female cyborg is sometimes understood as a symbol for “male technological aggression against women” [31 p. 452] that renews and reinforces patriarchal ideas about women. In a similar vein, several commentators challenge the uncritical reproduction of problematic binary models of gender (an issue that also becomes relevant regarding the aforementioned socio-psychological studies, as they are mostly based on a simple binary approach to gender). In this sense, Wang and Young point out the problematic ambivalence of gender references in technology development. They may appear necessary to address specific gender-related needs and properties. Yet, they always run the risk “of forming overly-simplistic categories and representations” and thus of reproducing potentially harmful stereotypes [34]. While there are increasing calls for more gender sensitivity in the field of human-robot interaction, the normative basis and implications of these debates have not found systematic ethical consideration so far.

3 Ethical Analysis of Gender Stereotyping of Robotic Systems in Eldercare

From an ethical point of view, eldercare is a particularly sensitive field of application for gender stereotyping in robotics. More or less frail and dependent older people arguably constitute a particularly vulnerable population whose protection calls for special safeguards and qualifications [35]. Furthermore, the field of care and especially eldercare is traditionally viewed as a primarily female domain and is therefore replete with often problematic stereotypical gender role expectations and asymmetrical power structures [36]. However, while there has been a veritable surge of ethical considerations of robots in care for older people in recent years (for overviews see [37, 38]), the concrete issue of gender stereotyping of robotic systems in eldercare has not received comprehensive ethical consideration so far.

In order to provide a first systematic exploration of the ethical aspects and problems associated with stereotyping of robotic systems in eldercare, we start from a so-called principlist framework. The principlist approach was originally formulated in the field of professional medical ethics [39]. It defines a set of prima facie middle-range ethical principles that cover common moral intuitions as well as major traditions of modern moral philosophical thought, such as deontological, consequentialist, and virtue ethical perspectives. In its original formulation, the principlist framework comprises the four principles of (respect for) patient autonomy, non-maleficence and beneficence (often subsumed under the principle of care), and justice [39]. In recent years, the approach has increasingly been expanded to other professional areas as well, not least to nursing care [40].

Consequently, principlism is currently also frequently referenced in the ethical evaluation of assistive technologies in eldercare. Indeed, many analyses of ethical problems in technology-assisted eldercare directly apply the principles of autonomy, care (non-maleficence and beneficence), and justice [41,42,43]. Some concentrate on only one or two central ethical principles, with a special focus on autonomy [44, 45]. Others start from the principlist framework and expand or differentiate the set of principles in order to adapt it to the specific technological context, for example, highlighting aspects of privacy, safety, or self-conception [46,47,48]. All in all, the principlist approach thus usually provides the normative foundation and starting point of ethical analysis and evaluation in technology-assisted eldercare, the basic common denominator, so to speak [49]. Therefore, it appears appropriate to employ the principles of autonomy, care (non-maleficence and beneficence), and justice as a first starting point and heuristic framework for detecting and analysing potential moral problems and conflicts of stereotypes and stereotyping in robotic systems for eldercare.

3.1 Autonomy

The principle of autonomy demands respect for the patient’s right to self-determination in different areas and respects, e.g., regarding everyday life activities, healthcare decisions, or dealing with personal information [39]. Depending on the underlying conception of self-determination, this can comprise considerably more than simply a maximum scope of options and individual freedom of choice. For example, a more comprehensive concept of autonomy would require the ability to make well-considered decisions in accordance with one’s own biography, value system, or significant social relationships [46].

In our context, the aspect of autonomy concerns first and foremost the question of whether the introduction of stereotypical robots in care settings of older people rests on the latter’s voluntary and informed decisions or represents a subliminal manipulation by others. Does stereotyping reflect the users’ own authentic will and deliberate choices or rather the strategic commitments of technology developers or caregivers instrumentalizing stereotypes for their own purposes? This question becomes particularly critical when older users with cognitive impairments are involved, e.g., people with dementia, as they have to be considered more vulnerable to deception and manipulation [50]. In addition, the decision for or against robotic stereotyping may also concern other persons living in the household and thus become a matter of relational autonomy and family decision making [51]. Here, conflicts between the preferences and decisions of patients and their close social environment can arise.

Furthermore, the problem of autonomy also pertains to the continued use of stereotyped robots in everyday life and care practices. In the long run, their operation might lead to a subliminal manipulation of user behavior, e.g., by reinforcing sexist communication styles and stereotypical personal activities such as handyman work for men or sewing for women. In this regard, the problem of stereotyping in robotic care touches upon the ethical debate on nudging, that is, the subtle use of positive reinforcement and indirect suggestions to influence individuals’ choices and behavior in desirable ways without seriously compromising their personal autonomy [52]. A further aspect of autonomy that becomes pressing in the everyday use of stereotyping in robotic systems is informational self-determination. In order to function, such systems usually depend on the continuous collection and analysis of large amounts of personal data, e.g., on users’ movements and behavior patterns. Hence, it is also relevant to assess whether the implementation of stereotypes or gender attributions in care robots requires the additional collection of sensitive user data, for example, about personal views, preferences, and lifestyle with regard to gender aspects, such as dressing, personal hobbies, or sexual inclinations and practices. In this case, the informational prerequisites of gender stereotyping could conflict with the users’ right to privacy. At the same time, the underlying distinction between private and public is itself traditionally entangled with assumptions about gender roles that frame eldercare as a female task located in the private sphere [53]. Thus, the stereotypical creation of a particular female look and feel that instills trust and suggests intimacy could serve the manipulative concealment of a systematic invasion of privacy. This could be all the more problematic as it might encourage the user to feel at ease and reveal even more of him- or herself than otherwise.

Eventually, the constant repetition of such everyday life effects of stereotyping in robotic systems for eldercare might even influence the users’ personality itself by generating, promoting, or reinforcing biased perspectives and prejudiced attitudes. In the end, this could undermine their fundamental capability or willingness to critically reflect on and overcome gender stereotypes and consequently cultivate a sexist and narrow-minded character. Such a consequence would ultimately cut to the very core of the ideal of autonomy as it was originally devised in the philosophy of the Enlightenment: the vision of a mature, self-reflective individual emancipated from traditional prejudice and unexamined preconceptions and committed to modern egalitarian standards of equality and universal human rights. By contrast, in cases where there is a mismatch between the user’s own perceptions of gender roles and the stereotyping strategies implemented in the robotic care system, the latter could also come to express a latent disrespect towards the respective person, e.g., by permanently promoting or even enforcing conflicting gendered perspectives and practices. In this regard, the stereotyping of robotic systems in eldercare might also conflict with the idea of respect for autonomy by constantly manifesting disregard and degradation vis-à-vis some of the core values and norms that are constitutive of the user’s personal identity and moral viewpoint, thus ultimately producing a defensive or even self-deprecating mindset and attitude.

3.2 Care

The principle of care originally refers to the professional responsibility to respect, protect, and promote the wellbeing of patients. This comprises the duty not to inflict any intentional harm (non-maleficence) as well as the obligation to promote their wellbeing (beneficence) [39]. The principle of care thus implies a recognition of and concern for the needs, interests, and vulnerabilities of others, especially if they are not able to take care of these aspects of their lives themselves [46].

In our context, the principle of care accordingly refers to the consequences of gender stereotypes in robotic systems for the older users’ bodily, psychological, and social wellbeing and quality of life. Thus, the question is whether and to what extent stereotyping leads to a robotically induced increase in personal satisfaction, fulfilment, and social orientation or is rather detrimental to the wellbeing and flourishing of users. For example, the implementation of gender stereotypes may improve users’ comfort and compliance with care robots and thus raise the overall effectiveness of nursing care as well as the quality of its outcomes. However, such stereotypes may also induce or amplify discomfort, especially if there is a mismatch between the user’s own perception of gender and the stereotypical features of the robot. As a consequence, there might be negative impacts on the care process and its results.

Furthermore, stereotyping may also have implications for the safety and security of robotic systems. This concerns users’ protection against different kinds of potential harm and damage that may result from proneness to error, malfunction, or misuse. Thus, it is important to determine to what extent stereotyping strategies in care robots could compromise their regular functioning and make them susceptible to malfunction, misoperation, abuse, or safety risks. For example, the implementation of certain psychologically effective gender cues in the design of robots may be in conflict with the requirements of their technical functioning. This could affect their functionality or operating security in a negative way, e.g., by adding technically superfluous mock-up props such as artificial ‘muscles’ or a particular ‘haircut’ that are inoperable or simply dysfunctional and therefore impair the system’s overall functioning or even provoke malfunction or breakdown. As already pointed out, another important aspect of safety concerns matters of data protection, because the effective implementation of stereotyping strategies in the context of robotic systems might require the additional collection of particularly sensitive and intimate personal data regarding relevant gender aspects that might be prone to abuse and thus call for special safeguards. Beyond that, the ease and acceptance created by the implementation of suggestive stereotypical gender features may create a false subjective sense of safety among users and caregivers, a feeling of being ‘in good hands’ that could lead them to rely too heavily on the robotic system and consequently neglect their own personal alertness, vigilance, and individual responsibility.

Finally, the long-term influence of the regular everyday implementation of stereotyped robots on the users’ fundamental preference structure itself as well as on the practice and culture of eldercare as such must be taken into consideration. For example, gender stereotypes in social or care robots may permit, induce, or encourage the development of emotional bonding and could therefore lead to romantic or even sexual attraction and relationships, which might have problematic consequences, e.g., if the robot is replaced, withheld, or interferes with pre-existing human relations [54]. Indeed, the effects of robotically induced or intensified gender stereotypes on the wellbeing of partners, caregivers, and other persons in the care setting, as well as on the family as a whole, also have to be taken into consideration. After all, the implementation and daily use of stereotypical robots may induce or increase chauvinist and sexist attitudes, or even transgressive or intrusive behavior on the part of the user, thus creating difficulties and discomfort for other caregivers or household members. Indeed, on a more general level, stereotyping of robotic systems poses the question of how stereotypical robots might play into the intricate web of cultural ideas, roles, and practices of care. Even if we accept the strong gender dynamic in this field, it is possible that the implementation of gendered care robots modifies or even undermines the specific quality of close and intimate informal or formal care relations by replacing constitutive features such as attentiveness, receptiveness, and empathy with mere technological mimicry of service robotics [55].

3.3 Justice

The principle of justice originally refers to the claim of equal professional care and treatment for all patients, regardless of any medically irrelevant personal aspects and features [39]. In particular, this implies the impermissibility of any discrimination due to gender, age, religion, sexual orientation, ethnic background, etc. Especially in the context of assistive technologies for eldercare, however, the organizational and systemic dimension of social justice also has to be taken into account [46].

In this respect, one central question regarding stereotyping in robotic care refers to distributive justice and the problem of equal access to (and allocation of) stereotypical or non-stereotypical robots and the ensuing benefits or disadvantages for users. If, for example, stereotyping was associated with increased compliance and significantly better quality of care, access to stereotyped robots should arguably not be a matter of morally irrelevant or impertinent factors such as socio-economic standing or insurance status. On the other hand, justice may also demand the provision of equivalent alternatives for those who do not want to make use of stereotyped robots.

Furthermore, care robots can also promote participation, that is, individuals’ access to and share in valuable social activities, e.g., social interaction and co-operation, economic creation of value, civil engagement, or public deliberation and political will-formation. As far as the implementation of stereotyping strategies in the design or programming of eldercare robots is concerned, the question is how gender stereotypes affect the possibilities of participation and inclusion of older men and women in different areas of social life. Thus, it would appear rather objectionable if certain assistive functions were neglected or not promoted because they were connected to social practices considered gender-inappropriate for the respective user, e.g., needlework for older men or technical activities for older women. In particular, it would be highly problematic if stereotyping strategies in care robots for older people were simply to adopt and reproduce norms that lead to a gender participation gap [56] and thus either systematically inhibit or impede the social and political participation of older women or favor or even enforce the social or political participation of older men. Especially options for political participation should not be coupled to gender assignments but rather left to the individual citizen’s own preferences and self-determined decisions. For example, a social robot that tended to direct female users to the coffee table and knitting doll, and male users to the newspaper stand and ballot box, would effectively undermine the fundamental idea of equal citizen rights.

Finally, and importantly, justice ultimately touches upon fundamental issues of (human) dignity and equal rights. These aspects are usually the focus of many public and academic debates on the morally or even legally problematic aspects of gender stereotypes in different domains [57]. The question is how the implementation of stereotyping strategies in robotic eldercare would play into this highly sensitive dimension of self-respect, basic rights, and social recognition: What gendered standards of acceptability, normality, or excellence of performance would stereotyped robots bring to bear in interactions with older users? For example, would they rate the relevance of support in different domains of daily life and social activities, such as intellectual activities, household chores, or hobbies and sports, according to different gender standards? And how would they interfere with the influence of aging, functional decline, and dependence on users’ gendered self-perceptions as a man or a woman? One possibility could be that they function as allies that support the execution of gendered activities and thus conform to the corresponding expectations. However, they could also be perceived as competitors that fulfil the respective gendered tasks and performances better than the users themselves and thus reinforce their sense of inferiority and insufficiency. These considerations already point to the fundamental concern that the very idea of stereotyping in and of itself contravenes basic egalitarian principles of justice in modern moral thought that call for equal respect and mutual recognition of all individuals. Thus, one could argue that stereotyping of care and social robots implies the affirmation and reinforcement of traditional gender stereotypes and therefore amounts to a technological promotion of existing societal bias, injustice, and discrimination. Especially in late-modern pluralistic societies, the question is whether and to what extent the scope of stereotypical social robotics can adequately match the diversity of gender identities of older people and the accompanying user perceptions, needs, and preferences, e.g., when non-binary gender identities come into play (for the general problem see [58]).

4 Possible Solutions: Explanation, Neutralization, and Queering

Attributions of social categories such as gender in robotic care can pose ethical problems if they contradict fundamental ethical principles like respect for autonomy, care for patients’ or users’ wellbeing, or the demands of justice. Based on the literature, three possible solutions to these problems can be identified: (a) the explanation of robotic functions to dispel gender perspectives, (b) the neutralization of gender attributions, and (c) the queering of the attributions. All three options are based on different moral assumptions and technological approaches regarding the issue of gender perspectives on robotics.

The idea of explaining the robot’s functions to avoid any human and gender associations, as for example suggested by Dufour and Ehrwein Nihan [59], can be found in the development of Fraunhofer’s care robot Care-O-bot, which is described not as anthropomorphic but as technomorphic in order to explain its function and capacity as a machine [60, 61]. Neutralization as a method to handle gender stereotyping can be found in several care robots that were deliberately developed and designed to be genderless. To achieve this, they are often given a childlike appearance. For example, the care robot Matilda, used in retirement homes, has a baby voice, big eyes, and a small chin to create a genderless, baby-like look [62]. Another example of a supposedly gender-neutral robot is NAO, which is explicitly characterized as “child-like and genderless” by the developers [63, 64]. As of now, there are no explicitly gender-fluid robots that could serve as a model for what we suggest here as queering a robot. However, there is the example of the companion robot Pepper, to which the developers refer with the male pronoun ‘he’ although its appearance rather suggests a female gender [65]. Furthermore, there are robots like Amy/Andrew or Relay whose design can be customized with male or female gender cues, for example with regard to colouring or voice, and these cues can also be mixed, as sketched below. In the subsequent sections, we offer an exploratory ethical analysis of these three strategies. Based on the principlist framework, we analyse whether (and to what extent) explanation, neutralization, and queering constitute options to avoid gender stereotyping or rather create new challenges.
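
To make tangible what such mixing could look like at the configuration level, consider the following deliberately simple sketch. The option names and values are hypothetical and do not correspond to the actual customization interfaces of Amy/Andrew or Relay.

```python
# Hypothetical configuration mixing conventionally 'male' and 'female' cues.
# All keys and values are invented for illustration only.
robot_config = {
    "voice": "low_pitch",      # a cue often read as male [23, 24, 25]
    "color_scheme": "pastel",  # a cue often read as female
    "name": "Alex",            # a deliberately ambiguous name [13]
    "pronouns": "they",
}

def describe_cues(config: dict) -> str:
    """Summarize the configured cues in one line."""
    return ", ".join(f"{key}={value}" for key, value in config.items())

print(describe_cues(robot_config))
# voice=low_pitch, color_scheme=pastel, name=Alex, pronouns=they
```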

4.1 Explanation

The strategy of explanation pursues a classical enlightenment approach. It suggests that if the function and task of the robot are adequately explained to the users, they will learn to see it for what it really is: a machine without any gender or other human features. They will no longer project anthropomorphic characteristics onto the robot to define it as a quasi-human companion but see it as a purely functional technical device. Indeed, some argue that adequate knowledge about the technical characteristics and the presumed suitability for the intended task can reduce stereotypical judgement effects [59].

In relation to the principle of autonomy, this approach appears promising since it aims to promote a well-informed and thus self-determined user decision. The users are presented with the objective facts and technical information about the robot, for example, the extent of personal data it collects. They are informed that they are in fact dealing with a machine and not with a human being and are thus enabled to treat it in the same way they would treat any other technical device. However, at least two problems pose challenges to this approach. Firstly, people usually do not treat machines, even obvious ones like cars, vacuum cleaners, or computers, just as inanimate objects [66]. They give them pet names, talk to them, and get emotionally attached to them despite being well aware that a car is only a machine and does not have attributes of a human being, such as feelings or thoughts. Hence, the psychological tendency to anthropomorphize technology seems to be rather strong and pervasive, so it is questionable whether it can simply be suspended by theoretical explanations. Secondly, the question must be raised of who decides what information is relevant and sufficient to explain the robot, and how it should be conveyed to the user. After all, the authority that explains the robot and conveys the allegedly neutral information assumes a position of power. This creates a danger of manipulation of the users. The person who explains the robot can decide to share the information they feel is important about the robot and withhold other information. The users are in a subordinate position because they have to trust that the person providing the information does not manipulate their perception in the way they explain the function of the robot. This creates a power imbalance between users and ‘explainers’ and disadvantages especially the already vulnerable group of older people as recipients of information [14].

Regarding the principle of care, which is aimed at ensuring wellbeing and avoiding harm to users, the situation can be viewed in different ways. If users receive an explanation of the use and function of the robot in a clear and comprehensible manner, this can contribute to their wellbeing and quality of life. The users are aware of the possibilities but also the limitations of the machine. They are aware that the care robot is indeed not a human being with a personality but a machine. Hence, no disappointment arises if a robot does not fulfil the expectations one would have vis-à-vis a human carer. If the options a care robot can offer are clear and the users are able to understand them, this can indeed support their wellbeing and certainly help to avoid harm once the mode of operation is really understood. Yet, the anthropomorphization of the robot does not necessarily create harm but might also increase wellbeing. In fact, this possibility is explicitly used in technology development to support a robot’s performance in its setting. For example, Matilda is designed as child-like in order not to appear threatening to older people with cognitive impairments [62]. In consequence, explaining that such a robot is only a machine could have detrimental effects on its utilization and functioning and on the wellbeing of the users. Furthermore, it is possible that users have already formed an emotional attachment to a robot before it is explained to them, or that an emotional attachment is crucial for the users to feel comfortable and to interact with the robot in an effective way. If the robot is ‘de-humanized’ by an explanation, this might ultimately affect these users’ wellbeing because they do not want to use ‘just’ a machine but need an emotional bond to increase their wellbeing.

The principle of justice and equal treatment poses questions similar to those raised in the context of autonomy. The elucidation of the ‘objective’ features of the robot must take place in a way that the user can understand. To this end, the explaining authority must be aware of the diversity of people who are supposed to receive an explanation of the robot. Here, factors like gender, class, occupation, and ethnicity must be taken into account to find the appropriate language to talk about the robot. A former mechanical engineer needs a very different way of explaining a robot than a migrant worker whose first language is not that of the ‘explainers’ and who may have only limited formal education. It is crucial to be aware of these differences and adapt the way the robot is explained to a diverse audience. Yet, the explanation of the robot should not take place in a way that reinforces stereotypes, for example, if men receive all the technical details while women are only told about the robot’s looks because the explaining authority assumes that women do not understand technical details. To ensure the just and equal treatment of all potential users, the educators must be aware of the approach’s challenges and limitations, because it might ultimately not be possible to enlighten everyone about the fact that the robot is a machine, which can then lead to manipulation, harm, and injustice.

In the end, the explanatory approach does not solve the issues of power in the context of stereotyping and attributions. It rests on a questionable model of public understanding of science that conceptualizes technology in terms of objective essential functions and sees sociocultural attributions as merely subjective projections, comparable to superstitious beliefs that can be dispelled by providing objective information. Accordingly, explanatory strategies do not acknowledge the inevitable cultural symbolism and social power structures that create gender attributions and can lead to stereotypes [32]. The individual and sociocultural attributions do not just kick in when the robot is brought to the user but are already infused in the process of development and design. Technology developers and designers are by no means neutral and also impute individual and sociocultural attributions to the robots they are developing. Ultimately, the possibility of explanation already appears questionable at a fundamental level, as it is dubious whether an objective explanation of a robot is even possible or whether robots are always inevitably charged with sociocultural implications and attributions that are open to interpretation. This is particularly challenging in a sensitive area like care robotics where the needs of vulnerable individuals are concerned. Especially if these gender attributions are introduced, occasionally implicitly and unintentionally, by the same technology developers and designers who then convey the explanation of the robot, this can create a serious power imbalance. Hence, the method of explanation might in the end create more problems of autonomy and justice than it actually solves.

4.2 Neutralization

The second option is neutralization. Its aim is to create a care robot that is gender neutral and does not have any gender cues (e.g. [12]). Thus, one could argue that such a robot would even help to avoid stereotypes that are pervasive in human interaction. It could be designed to look indeterminate and have features that can be interpreted as neither male nor female. It could speak with a neutral voice and act in a gender-neutral manner. For example, the Pepper robot has been created with the notion that due to its appearance and range of voices it can be perceived as neutral [67]. The idea of neutralization appears appealing as it seems to offer an engineering solution, a technical fix for issues of stereotyping and possible discrimination through the systematic erasure of all gender markers. Such a neutral robot might not only solve problems regarding gender stereotypes but also defuse intercultural conflicts over the understanding of gender attributes and other social categories, for example regarding race, and prevent possible robo-sexism resulting from the sexist embodiment of a robot [12, 68, 69].

However, neutralization could also create problems with regard to the autonomy of the users. Many societal discourses are gendered and often split in a binary manner that opposes the male and the female [31]. Technology and robots are not excluded from this because they are not excluded from societal discourse. This raises doubts about the feasibility of neutralization. Most people actually identify in a binary way as either male or female, and gender remains an important aspect of identity even for those who identify as non-binary. To eliminate this central identity marker from the interaction with the robot might restrict the self-determination of the users and in this way endanger their autonomy. For them, it may be essential in interactions to know whether an entity can be identified as male or female in order to act accordingly in the context of the pertinent social norms they are used to. The elimination of gender cues might lead to confusion and decrease the users’ possibilities of autonomous decision-making regarding the use of the robot since it no longer fits their own social norms. Since gender is such an important and at the same time controversial category in Western societies, neutralization strategies could thus eventually create controversy and threats to autonomy.

The aspect of wellbeing and quality of life is crucial in any care context. The aim is to improve the wellbeing of the users and increase their quality of life. This not only requires that a robot performs certain mechanical tasks but also that it is integrated into their everyday lives. In this regard, the elimination of gender from robot design might cause alienation, especially in a social robot. As a result, the robot might not be experienced as something comfortable or even pleasurable the users enjoy but as a strange machine that intrudes upon their private space and life world. Especially for older users with cognitive impairment, social categories like gender or race can be crucial to make them feel comfortable around the robot and increase their ability to interact with it and benefit from its utilization, even if these categories are applied in a stereotypical way [70, 71]. Thus, although neutralization is frequently perceived as a promising technological strategy to prevent harm and ensure justice and equality in the development of robots, the elimination of any gender cues and markers might ultimately prove detrimental and effectively rather cause harm and decrease the wellbeing and quality of life of certain user groups.

Ultimately, it is questionable whether the creation of a gender- and otherwise neutral robot would be feasible at all. Even if technology developers design robots with the explicit intention of making them gender-neutral, studies indicate that users still attribute a gender to them [21, 67]. This suggests that even robots created as neutral cannot prevent subjective interpretations regarding gender. Apparently, the attribution of social categories is far more complex than suggested in some areas of technology development. It is not just a matter of deliberate design decisions and intentionally placed cues and markers. The developers and designers are not in complete control of a robot’s features in the sense that they could simply determine their perception and interpretation by others. In addition, social categories like gender are not static but rather fluid constructions that change over time and across different observers, standpoints, and contexts. To create a gender-neutral robot, it would be necessary to be aware of all possible cues in order to avoid their implementation in the technology. Their fluidity and variability, not only over time but also between different contexts, makes it unlikely, and perhaps impossible, to develop a universally gender-neutral robot that is actually experienced as neutral by all users.

4.3 Queering

A third option could be the queering of the robot. Inspired by feminist and queer theorists such as Judith Butler [32] and Jack Halberstam [31], queering aims to deconstruct normalizing conceptions of identity. This strategy comprises a flexible and subversive (re-)combination and implementation of social categories. It suggests a non-binary and fluid gender attribution to robots that challenges common stereotypes without neglecting individual user preferences. Thus, queering might be able to acknowledge the inevitable relevance of gender aspects in human-robot interaction while at the same time challenging their conventional and normalizing application and promotion. In this way, it could support an inclusive approach that also considers discriminated and marginalized groups [72]. The idea of queering robots takes some inspiration from what has been called a cyborg, a cybernetic organism, a hybrid of machine and organism, which is part of a post-gender world beyond the norm of a binary gender order [33]. Taking up these considerations in the context of eldercare and thus “graying the cyborg” [73], one could imagine a care robot that challenges previous notions of robots, eldercare, the cared-for person, and the carer. A queered care robot may resemble Haraway’s cyborg in that it cannot be understood in binary terms of gender [74]. It would be neither female nor male but could adapt its gender markers to the preferences of the users. As such, it would not rise above gender but permit a fluidity that incorporates male and female, as well as everything in between.

At first sight, the queering of care robots may seem like a viable solution due to its flexibility in adapting to the needs and wishes of the individual user. Thus, in the context of the principle of autonomy, the queered robot seems to respect self-determination. Users can pick a care robot that fits their own needs and preferences. It can have the gender characteristics that suit them best. Yet, this leaves the question of what happens when more than one person uses the robot, e.g., in a care home. Frequently, the users include not only the cared-for person but also human carers. For them, a robot fitted to the needs and wishes of the cared-for person might not represent their own perspective. The cared-for person might actually wish for a type of robot that creates feelings of disrespect and discrimination in other users. In this case, self-determination is not guaranteed for all parties involved. With regard to autonomy, queering only seems to offer a solution where the robot is used by a single person, or where all users have common preferences about the design. Otherwise, conflicts can arise from the individualization of the design. This raises the question of whose preferences should be prioritized. One could argue that the robot should primarily help to care for the cared-for persons so that their wishes should have priority. However, these wishes might conflict with the aim of the care robot so that the care task cannot be executed as intended. The respective wishes might also be discriminating against other members of the care setting and thus collide with the principle of justice. A second issue in the context of autonomy is the question of how gender fluidity can be implemented in a robot. Two possibilities are conceivable. Firstly, the users become involved in the design and production process, and their wishes are incorporated already at that stage. This would pose a disadvantage when the users’ preferences change over the course of use [75]. Secondly, the robot could learn the preferences of the users, for example by means of machine learning strategies, and adapt accordingly (see the sketch below). This would probably require the extensive collection of possibly very personal user data. Here, questions of privacy and data protection come to the fore, and it must be discussed how the data is stored and who has access to the information so that no harm for the user will result. Additionally, both options would involve extensive participation on the users’ part as well as technological advances that are not yet on the horizon and might be rather pricey for a wider application.
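
To make the second possibility more tangible, the following sketch shows one conceivable way a robot could adapt its gender cues to user feedback. It uses a simple bandit-style (epsilon-greedy) update and stores only aggregate scores rather than raw interaction logs, gesturing at the data-minimization concerns just mentioned. The entire design, including all names, options, and parameters, is a hypothetical illustration under these assumptions, not an existing system.

```python
import random

# Hypothetical cue dimensions the robot could switch between at runtime.
CUE_OPTIONS = {
    "voice": ["low_pitch", "neutral", "high_pitch"],
    "pronouns": ["he", "they", "she"],
}

class CuePreferenceLearner:
    """Epsilon-greedy sketch: adapt gender cues to explicit user feedback.

    Only aggregate scores per cue value are stored (a data-minimization
    choice), not per-interaction records, to limit privacy exposure.
    """

    def __init__(self, epsilon: float = 0.2):
        self.epsilon = epsilon
        self.scores = {dim: {v: 0.0 for v in vals} for dim, vals in CUE_OPTIONS.items()}
        self.counts = {dim: {v: 0 for v in vals} for dim, vals in CUE_OPTIONS.items()}

    def choose(self, dim: str) -> str:
        """Mostly use the best-rated cue, but occasionally explore others."""
        if random.random() < self.epsilon:
            return random.choice(CUE_OPTIONS[dim])   # explore: stay fluid
        return max(self.scores[dim], key=self.scores[dim].get)

    def feedback(self, dim: str, value: str, rating: float) -> None:
        """Update the running mean rating for a cue value (rating in [-1, 1])."""
        self.counts[dim][value] += 1
        n = self.counts[dim][value]
        self.scores[dim][value] += (rating - self.scores[dim][value]) / n

learner = CuePreferenceLearner()
voice = learner.choose("voice")
learner.feedback("voice", voice, rating=1.0)  # user reports feeling comfortable
```

Even such a minimal design illustrates the trade-off discussed above: the exploration step keeps the robot’s gender presentation fluid, while the learned scores constitute exactly the kind of sensitive preference data whose storage and access would need to be regulated.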

When it comes to the principle of care, the queered robot also appears to offer a suitable solution for individual users. They can pick the robot that fits their needs and will enhance their wellbeing and quality of life. The aim of queering is not necessarily to erase any gender binaries but rather to acknowledge them and reflect upon their possible effects on the care process. Again, however, this might create challenges and conflicts between the different user groups. For example, from a professional care perspective, the cared-for person may not understand or know what they need at a given point. This could be due to cognitive impairment or a clash of expectations regarding the tasks of the care robot. This once more raises the question of who decides about the construction of the robot and the relevant care needs whose satisfaction increases quality of life. Additionally, the users’ preferences might pose safety risks as they can create a conflict with technical functions. For example, if a user wishes the robot to have very long hair because this feature makes them comfortable, the long hair might interfere with the robot’s functionality. Then the user’s specific choice might indeed create a safety risk and influence their wellbeing in unanticipated ways.

However, the queered care robot can score in the field of justice and equality. The fact that it goes beyond the gender binary as a cultural condition that leads to intended and unintended discrimination meets a core demand of feminist theorists [32, 76, 77]. While this does not necessarily serve the wishes or needs of the individual users of the robot, it can contribute to the avoidance of discrimination and the promotion of equality in society. The question is how unjust and discriminatory needs and wishes of the individual can be weighed against the overall societal responsibility to avoid and decrease discrimination and inequality. The desire to overcome the often harmful binary construction of gender in society might be considered more important here than the fulfilment of individual wishes that increase the discrimination and inequality of others. On the other hand, the purpose of the robot must be critically assessed. Is the aim to improve the care of older people and increase their quality of life? Or should it also contribute to societal progress regarding the elimination of inequality and injustice? Even if we concede that the former has priority, the queered robot creates a dilemma by raising the issue of whose needs and wishes are relevant or more important, not only in comparison to those of others but also in relation to societal requirements and moral norms. As appealing as queering strategies may appear at first sight in promoting autonomy, wellbeing, and justice, they raise other, even more fundamental ethical issues.

Thus, although all three options – explanation, neutralization, and queering – have their problems and shortcomings when analysed with the help of the principlist approach, the analysis makes clear that the creation of a care robot in relation to social categories, such as gender, age, race, or others, calls for an interdisciplinary approach in order to avoid discrimination by and of users and other parties involved. The approach of queering still shows the most promising potential when it comes to reflecting on and challenging gender stereotypes (and possibly also other social categories) and their effects in care robotics.

5 Conclusions and Outlook

Gender is a relevant category in human-robot interaction. It must be considered a serious and complex issue in robotics, especially as the ethical implications and social consequences of gender stereotypes in the development and design of care robots are largely neglected in current research. The heuristic use of the principlist framework reveals a whole variety of potential moral problems and conflicts regarding gender stereotypes in care robotics. Their systematic consideration can help to develop a more nuanced understanding of the sociopsychological dynamic of human-robot interaction and increase both the social acceptance and moral acceptability of robots in eldercare.

At a general level, many of the moral problems and conflicts addressed in our analysis may not be unique to robotic care. For example, issues of autonomy, biased perceptions, and stereotypical interactions can obviously also arise in the context of human nursing care [78]. As we have pointed out, the use of robots may actually even have the potential to alleviate or circumvent some of the pertinent pitfalls and problems, e.g., the sociopsychological mechanisms of stereotype threat that can impede the performance of male nurses in a traditionally female profession [79]. Yet, from a moral philosophical point of view, it makes an important difference whether stereotypes are simply passed on unknowingly and involuntarily in human interaction or rather deliberately introduced or even strategically employed in the development of technological devices. In this vein, it also becomes clear that it is ethically relevant how exactly gender stereotypes find their way into a robot. At the very least, we have to distinguish between bigotry and stereotyping. The former describes the inconsiderate infiltration of robot development and design with common gender stereotypes. By contrast, the latter refers to the intentional and strategic use of gender stereotypes in the context of robotics.

Of course, further sociopsychological, technological, and ethical research on the meaning and functioning of gender stereotypes in the context of social robotics is necessary in order to confirm and refine our analyses [80]. In particular, it is important to investigate the concrete influence of stereotyping on the beliefs and attitudes of older people. Stereotyping strategies usually build on psychological assumptions about the mental rigidity or malleability of their target group that call for closer empirical examination. For example, it is important to know whether older people actually have certain fixed stereotypical notions regarding gender roles and to what extent these can be reinforced or modified by stereotypical robots. Furthermore, technological research is necessary in order to assess the concrete technological preconditions and limitations of stereotyping strategies. For example, the probability of safety and security problems due to stereotyping depends on the range of technological possibilities for implementing gender stereotypes in concrete robotic devices. Finally, the limitations of the medical ethical framework of principlism in the field of care robotics also need critical reflection [49]. Thus, further moral philosophical theories and perspectives must be considered in future analyses, for example eudemonistic questions regarding the preconditions of human flourishing and the values involved in leading a good life that are particularly important in the context of eldercare.

The need for further research also applies to possible solutions to the problems of stereotyping in robotic care. In this paper, we primarily discussed three types of strategies: explanation, neutralization, and queering. Explanation and neutralization have significant problems regarding their theoretical premises and social adaptation. By only considering technological possibilities to create a ‘transparent’ or gender-neutral robot, these approaches fail to consider important insights from social research and cultural studies regarding the fundamental mechanisms of the cultural construction of meaning and its entanglement with social power structures. By contrast, the queering of robots and the idea of a gender-fluid robot represent a new, innovative perspective that deserves closer examination and elaboration. Yet, we have also identified potential problems and shortcomings in this approach. Especially the issue of multi-user groups is significant here. The wishes and needs of one user might not be the wishes and needs of another. In extreme cases, this might even lead to further discrimination if one user demands a stereotypical robot that is perceived as discriminating by other users. Beyond that, the queering of gender cues might lead to problems since a binary gender order is deeply ingrained in Western societies. To queer this order might result in confusion and discomfort on the part of the users rather than increasing their wellbeing.

Of course, the general problems addressed here transcend the domain of eldercare and may also pertain to other fields of application of robotics such as customer service or industrial production. Furthermore, gender is not the only relevant social category when it comes to stereotyping. It has been indicated that race and other categories like class or disability intersect in these contexts [14]. Especially the issues of race and whiteness have received increasing attention in recent science and technology studies on robot design [14, 81]. This multidimensionality and intersectionality of social categories needs to be taken into account in further research so that categories are not viewed as mutually exclusive or hierarchical but as intersecting. Here, the notion that “the robotic revolution directly abuts up against traditional boundaries of the ethical landscape and perhaps even punches through” [37 p. 23] appears pertinent. The issue of stereotyping in robotics illustrates the relevance of a diversity-sensitive perspective in roboethics and of a critical reflection on roboethical principles and methods regarding their implications and applicability. Thus, an interdisciplinary approach appears best suited to tackle the issues of (gender) stereotyping in robotic eldercare and to develop morally acceptable, diversity-sensitive solutions.