Introduction

Since the introduction of home computers in 1977, rarely has a new technology divided public opinion in the way robots do. Although continuously growing, the current market for personal domestic and service robots (12.2 million units sold in 2018 worldwide), or “social” robots for entertainment (4.1 million units sold in 2018 worldwide), is small, especially in comparison to the deployment of industrial robots (154 million units sold in 2018 in China alone, 371.5 million units in the 15 biggest markets of the world [1]). However, it is likely that domestic and social robots in particular will become increasingly prevalent [2, 3], until one day, robots at home will be as common as home computers are now. Meanwhile, a range of open questions emerges and requires discourse. For instance, researchers need to address the societal and ethical impact associated with the introduction of (social) robots into our everyday lives.

Ethics in general can be defined as principles that distinguish between behavior that helps and behavior that harms [4]. Roboethics is a research area covering all ethical issues related to robots and robotic assistance. Roboethics, or robot ethics, incorporates “ethical questions about how humans should design, deploy, and treat robots” [5, p. 243]. More specifically, ethical robot behavior is—in this context—understood as “an agent’s behavior governing a system of acts that affects others (i.e., patients) according to moral rules” [6, p. 483]. The importance of ethics in research on robots becomes even more obvious in light of the vast amount of literature on ethics in human-robot interaction (HRI): A literature search on Google Scholar yields 14,500 results for “ethics and human-robot interaction”, while searching for terms like “ethics and robots” or “ethics and robotics” leads to over 150,000 and over 136,000 results, respectively. The topic of ethics in (social) robotics has been discussed in the literature for decades (e.g., [7,8,9,10,11]). The current work will, however, only provide a glimpse into the most recent issues and debates. We do so by focusing on the last five years of research on ethical and societal issues in the field of HRI.

Areas of Robot Use and General Ethical Challenges Associated with Robot Deployment

Robots are deployed in various fields of use in which they offer context-specific benefits and challenges. For instance, in industrial settings, robots can increase productivity and relieve workers of physically challenging tasks. The automotive industry is a context in which robots have already been used for years [12] to relieve the burden on workers and to increase productivity and flexibility (e.g., [13]). Furthermore, robots play a role in the military context (e.g., [14]) to reduce the number of human soldiers required for a mission and to minimize the number of casualties. Similarly, robots are used for search-and-rescue tasks in terrains that are either dangerous or inaccessible for human rescue teams (e.g., [15]). Robots can be utilized for sexual pleasure, enabling sexuality without the risk of sexually transmitted diseases and unwanted pregnancies, and potentially reducing sex-work-related problems, such as sex trafficking [16]. Robots are used in the care sector (e.g., [17]) and in rehabilitation (e.g., [18]) to relieve the burden on care personnel (e.g., [19]). Robots are also beneficial as members of human-robot teams that collaborate in the medical setting, e.g., during surgeries [20]. Finally, robots serve humans as assistants and companions in the home environment (e.g., [21]).

Clearly, apart from such potential benefits, the successful integration of robots in society also introduces several challenges. According to Fosch-Villaronga and colleagues [22••], potential ethical challenges are subsumed under two so-called “meta-challenges”: Uncertainty and responsibility. First, the meta-challenge “uncertainty” refers to user uncertainty concerning the laws and regulations governing robot use. Uncertainty represents a meta-challenge because many potential legal and societal issues concerning robot use are either still unknown or have not yet been regulated by laws or rules. Second, the meta-challenge “responsibility” refers to the difficulties associated with the open issue of who regulates or holds responsibility when humans interact with robots. This concerns the regulation of robot use, responsibility for damage caused by a robot, responsibility for the correct disposal of robots, and so on. According to Fosch-Villaronga and colleagues [22••], these two meta-challenges influence each and every ethical issue that is discussed in the literature.

General challenges associated with the deployment of robots vary in terms of their ethical relevance: Robot acceptance (e.g., [23]) and robot usability (e.g., [24, 25]) are deemed less ethically relevant issues compared with the concrete fears of potential end users. One major fear regarding robots revolves around job replacement (e.g., [26, 27]). Maurice et al. [28] offer a general discussion of the ethical issues related to robots and assistive technology in the workplace. Acemoğlu and Restrepo [26], as well as Dauth et al. [27], have mainly covered job replacement in industry and the labor market, respectively. Despite the fact that robot technology is currently not advanced enough to replace human labor in sectors such as therapy and care, some authors have already expressed concern about the future replacement of human caregivers [29, 30]. An additional fear relates to an excess of robotic assistance. For instance, Gransche [31] indicated that excessive assistance by robots could make us either incapable or unwilling to fulfill even simple tasks, thereby rendering humans helpless without robot support.

Besides reflecting concrete fears with regard to robots, there is a high number of potential ethical issues that are summarized under the umbrella term “ethical, legal and security issues” (ELS; e.g., [22••]). Research on ELS issues, especially in the last five years, has examined law and liability, privacy and (data) security, consent, and, owing to its close connection to security, autonomy. The aspect of law and liability covered within the framework of ELS issues concerns the question of who is responsible, for example, if a robot malfunctions (see [32] or [33] for discussions on the responsibility of machines). This topic often goes hand in hand with privacy and (data) security issues. Who gets access to what data? What threat does hacking pose? Do we know which data a robot is going to collect, and how do we consent? Are we even aware of the presence of a robot in a public space? Questions regarding the collection, storage, and usage of our data, which might be collected by robots around us, are often discussed in the context of Big Data (see [34,35,36,37,38] on Big Data and privacy with regard to assistance technology and robots). Additionally, the deployment of robots in public spaces is relevant not only for privacy and security issues, but also for consent. When humans and robots (need to) interact, it is crucial that there is the opportunity to give or deny consent (for a discussion on consent in HRI, we refer to [39]).

Another important issue that is discussed widely in the context of ethics in robotics in general concerns robot autonomy. According to Bekey [40], autonomous robots are “intelligent machines capable of performing tasks in the world by themselves, without explicit human control over their movements” [40, p. xiii]. Robot autonomy is often addressed in light of Asimov’s “Laws of Robotics” [41], which to this day inspire researchers in HRI. According to Asimov [41], first, a robot must not harm human beings, or humanity, neither through action nor through inaction. Second, a robot must follow human orders, as long as the orders do not lead to harm of another human being, or humanity. Third, a robot must protect its own existence, as long as doing so does not lead to harm and does not disregard an order given by a human. The “Laws of Robotics” imply that a robot must act if a human or humanity is about to be harmed, even if there are no explicit orders by its user. Accordingly, a robot may refuse an order if harm would be the consequence of that specific order. Some authors claim that humans must be responsible for the actions of machines, even if the machines act autonomously [42]. Relatedly, the notion of autonomy is also heavily discussed in the context of autonomous driving since autonomous vehicles make decisions that directly impact human safety, for instance, by Brändle and Grunwald [43], Grunwald [44], and Sparrow and Howard [45].
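The strict precedence ordering implied by Asimov’s laws can be illustrated with a short, purely hypothetical sketch. The Action class and its boolean flags are invented for illustration and deliberately gloss over the hard part, namely how a robot would predict harm in the first place; the sketch merely shows how the three laws act as prioritized filters over a set of candidate actions.

```python
# Purely illustrative sketch: Asimov's three laws as prioritized filters over
# candidate actions. The Action class and its boolean flags are hypothetical;
# real systems would have to *predict* harm, which is the actual hard problem.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Action:
    name: str
    harms_human: bool       # Would executing this action harm a human? (First Law)
    violates_order: bool    # Would it disregard a legitimate human order? (Second Law)
    endangers_robot: bool   # Would it endanger the robot itself? (Third Law)


def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Return a candidate that satisfies the laws in strict priority order."""
    # First Law: discard every action that would harm a human.
    safe = [a for a in candidates if not a.harms_human]
    if not safe:
        return None  # No permissible option; note that inaction may also cause harm.
    # Second Law: among safe actions, prefer those that comply with human orders.
    obedient = [a for a in safe if not a.violates_order] or safe
    # Third Law: among the remainder, prefer actions preserving the robot itself.
    preserving = [a for a in obedient if not a.endangers_robot] or obedient
    return preserving[0]


if __name__ == "__main__":
    options = [
        Action("take a shortcut through the crowd", harms_human=True, violates_order=False, endangers_robot=False),
        Action("ignore the delivery request", harms_human=False, violates_order=True, endangers_robot=False),
        Action("take the longer, clear route", harms_human=False, violates_order=False, endangers_robot=False),
    ]
    chosen = choose_action(options)
    print(chosen.name if chosen else "no permissible action")  # -> "take the longer, clear route"
```

Even this toy example shows that the laws only constrain choices among options whose consequences are already known; anticipating those consequences is precisely where the real-world difficulty lies.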

Even though rooted in science fiction literature, Asimov’s “Laws of Robotics” have inspired reflections on robot morality and autonomy (e.g., [46,47,48]). The more autonomous robots become, the more they are potentially capable of making their own decisions—which brings in the notion of robot morality. Malle [5] discusses machine morality, addressing questions concerning a robot’s moral capabilities and their technical implementation. However, regardless of the actual moral capacities and capabilities of robots, humans hold robots accountable for their actions to a certain degree [49]; some studies even show that humans apply the same moral norms to robots as to humans [50]. Banks [51] claims that, independent of the actual level of autonomy and/or agency determining a robot’s moral capacity, the robot can be perceived as a moral agent. At the same time, Bigman and Gray [52] show that people are averse to machines that make morally relevant decisions. Other authors suggest that machines cannot be moral under any circumstance [53], or that they cannot be ethical agents at all [54]. For a general critical discussion on moral robots, consider Scheutz and Malle [55]. Those with a particular interest in robot heroism as a special case of robot morality may want to consult Wiltshire [56]. Regardless of whether robots will ever engage in moral decision-making, research indicates that humans indeed perceive robots as potential moral agents [49,50,51]. This, in turn, has implications for robot users and the expectations they bring into human-robot interaction.

HRI-Specific Ethical Challenges

Complementary to the general ethical issues that must be considered when introducing robots into human lives and human society, there are ethical issues that are specifically relevant to HRI. Among such HRI-related topics, discrimination involving users and robots (e.g., [57,58,59]), dehumanization of users (e.g., [60, 61]), and deception by robots (e.g., [62,63,64,65]) are frequently discussed. Considering discrimination, scholars point to the issue that if robots are programmed by humans, they may fall prey to the same biases that are known to cause problems in human-human interaction (for a discussion of discrimination in AI, see [66]). One example of such a bias is racial bias in the use of police force [57], meaning that a robot could exhibit the same biases as human police officers when deciding on the use of (deadly) force during a police operation. Moreover, robots may not only discriminate against humans through their behavior but may also embody discrimination through their design, which commonly features Euro-centric or overly feminized appearance [58]. Sparrow [67] even argues that, because robots tend to be perceived as slaves, a robot appearance resembling the ethnicity of groups formerly abused as slaves might be highly problematic.

Another ethical and social issue that is broadly reflected upon concerns the use of robots to alleviate the lack of social connection faced by some groups in society. The concern is that robots will ultimately replace human social relationships, resulting in dehumanization of the human (elderly) user by society (e.g., [60, 64, 68, 69]). However, a lack of social connection might not be unique to the elderly population. For example, Yamaguchi [70] reports on individuals who have married a virtual agent because of a lack of potential human relationship partners, or because they lack the social competence necessary to establish and maintain close human–human relationships. De Graaf [61] argues that this issue might be aggravated, as she claims that, in a society in which robots are a matter of course, humans’ social skills and willingness to “deal with the complexity of real human relationships” might decrease [61, p. 595].

However, when thinking about the relationship between humans and robots, there are more issues to consider than just the replacement of humans. Relationships between humans and robots might even be considered deceptive by their very nature, as they can only simulate a connection that resembles a human–human relationship. Exemplary questions in this context are as follows: Does a robot deceive us when simulating a connection resembling a human-human relationship? Is a robot allowed to lie? It can be argued that robot deception might be legitimate under some circumstances, for instance, when the goal is to make the user feel positive or comfortable [62]. Other researchers, in contrast, contend that robot deception is ethically problematic, no matter what [64, 65]. One topic that is especially relevant with regard to robotic deception in HRI is empathy, more specifically the evocation of empathy in the user. Coeckelbergh [71] argues that the recognition of the vulnerability of humans as embodied beings, and the fact that human beings recognize each other as equally vulnerable, is a necessary condition for empathy to emerge. He calls this recognition of vulnerability “mirroring” and deems the notion of robots being vulnerable a necessary prerequisite for vulnerability mirroring. However, because robots cannot be vulnerable in the same sense as humans are, the idea of robot vulnerability may be associated with deception as well. Liberati and Nagataki [72] elaborately discuss the ethics of vulnerability in relationships between humans and robots, which includes empathy as well. With regard to the question of simulating a social connection between humans and robots, Coeckelbergh [63] suggests that robots can never be friends in the Aristotelian sense, since they lack the mutuality and reciprocity necessary to form a friendship. Coeckelbergh [63] also argues that what is considered deception in some works is not necessarily deception, as the term implies that robots create a virtual world that contrasts with the “real” world, which is not necessarily the case. To evaluate robot deception in any given case, it might be necessary to consider whether the robot behavior counts as deception under the specific circumstances and, if so, whether the deception is necessary or beneficial for the individual.

Apart from discrimination, dehumanization, and deception, which represent phenomena that are potentially relevant for all types of robots involved in HRI, some authors suggest that there are ethical issues specific to socially assistive robots (SAR) in particular (e.g., [73]). They propose that these issues are unique to SAR due to their more social nature compared with other types of robots. SAR are defined as a class of robots between “assistive robotics (robots that provide assistance to a user) and socially interactive robotics (robots that communicate with a user through social and nonphysical interaction)” [74, p. 25]. Wilson et al. [73] suggest that the following ethical issues are particularly relevant for social robots: Respect for social norms, the robot being able to make decisions about competing obligations, building and maintaining trust between robot and user, the potential problem of social manipulation and deception by the robot, and the issue of blame and justification, especially if something goes wrong. As building and maintaining trust between robots and users is an important ethical factor in the context of socially assistive robots [73], there are trust-based approaches to ethical social robots. These emphasize the importance of building and maintaining trust, and the potential pitfalls of trust between user and robot. To illustrate, Koyama [75] presents a recent trust-based approach to the ethics of social robots. In addition to ethical issues specific to SAR, the discussion on ethics in HRI also features cyber-physical systems, which, in this context, are understood as intelligent robotic systems that are linked to the Internet of Things and interact with the physical world [76]. Furthermore, for a classic overview of ethics in HRI, we recommend Lin et al. [77]; for a recent overview, we refer to Bartneck [78].

Moreover, there are two further areas in HRI in which ethics play a major role: Ethics in the conduct of HRI research, and ethics related to robot rights. HRI research and the field’s specific research methods bring their own ethical issues to consider. One of the most important issues in this context, which has previously been discussed with regard to relationships between humans and robots in general, is deception of the user. Deception is frequently used in research and is often deemed necessary, because a complete disclosure of all information regarding the experiment would strongly influence participants’ reactions. In HRI research, deception is especially relevant because the Wizard-of-Oz approach is frequently used. Therefore, with regard to ethics, the possibility of deception through an improper use of a Wizard-of-Oz approach and the resulting potential for embarrassment of the participant must be acknowledged by the researcher [79], as must “Turing Deceptions” [80, 81]. Because this article focuses on HRI research in general rather than on research methods in particular, we recommend Punchoojit and Hongwarittorrn [82], who cover the ethical issues that must be recognized when conducting HCI or HRI research.

Regarding robot rights, the ethical and societal issues discussed so far take a human-centered rather than a robot-centered perspective. However, the literature also addresses ethical and societal issues that concern robots themselves. For instance, Loh [83] refers to the difference between robots as moral agents and moral patients, which can be applied to robots as ethical agents and ethical patients as well. Literature on this topic examines and discusses behavior towards robots, robot rights, and the question of whether ethics apply to robots at all. The topic of robot rights and behavior towards robots is vast enough to require its own literature review. Therefore, we refer readers to [59, 84,85,86,87,88] for further insights into this matter.

Sensitive Areas of Robot Deployment and Associated Ethical Challenges

There are some areas of robot deployment that can be regarded as potentially more ethically sensitive than others, introducing domain-specific ethical challenges. The use of robots for warfare, for sexual pleasure, or for the care of vulnerable target groups are key examples. A more general perspective on robots for warfare is provided by Andreas [89]. Philosopher Robert Sparrow, too, has intensively researched the notion of robot killers (e.g., [90,91,92,93], as well as Sparrow and Lucas [94] on robots for war at sea), but has also inspired scholarly discourse on sex robots, discussing them in the context of robot rape [95]. No less ethically sensitive is the issue of robotic assistance in the medical field and carebot use to assist vulnerable end users, such as people with cognitive impairments, children, or seniors. Steil et al. [96] provide valuable insights into the ethical challenges associated with robot deployment in medical settings. In the field of robotic care, robots are employed for the care of elderly people, people with disabilities, and children. These groups can be considered vulnerable due to age, due to reduced cognitive and/or physical abilities, e.g., in the case of dementia (see [97] or [98] for ethical recommendations on assistive robotics in dementia care), or due to cognitive and/or physical abilities that are still developing at a young age (e.g., [99]). Robots can be very helpful in assisting these groups and/or their caretakers in completing tasks, by monitoring user health and behavior, and by providing companionship ([64]; for a description of a robotic care system, see [100] or [101]).

However, when closely interacting and sharing space with robots in general and with carebots in particular, the physical safety of users has to be assured (e.g., [102]). Physical safety is not the only issue that has to be taken into account with regard to the interaction between carebots and humans, though. The topic of ethics in robotic care is widely discussed in the literature. Starting broadly, Manzeschke [103] provides a general discussion of ethics in robotic care, taking into account the different levels of relations between robots and humans in this context: The robot as a mere tool, the robot as a tool with social capabilities, and the robot as an agent with which the human develops a relationship. With particular regard to the specific relationship between humans and robots in the context of care, Körtner [104] suggests six aspects to consider for an ethical integration of carebots into users’ lives: First, he names deception, understood as the potential of the user to form incorrect ideas of the robot’s cognitive and emotional abilities. Second, he names dignity, referring to the risk of patronizing or infantilizing elderly people (e.g., by giving dementia patients the robot Paro [105] as a toy to play with). Third, he refers to isolation, since robots might replace all human contact. Fourth, he mentions privacy, especially regarding the fact that people who are reliant on care are potentially more willing to sacrifice privacy in favor of care and security. Fifth, he lists safety, which might be more important for elderly people as, due to reduced walking stability, they might be knocked over by a robot more easily than younger people. Finally, he suggests vulnerability due to potentially reduced cognitive abilities (e.g., due to dementia) and, therefore, a reduced ability to give consent to interaction with a robot.

However, this list is not necessarily exhaustive. Zwijsen et al. [106] propose the following factors as specifically important in the context of robots in elderly care: The personal living environment (encompassing privacy, autonomy, and obtrusiveness), the outside world (encompassing stigma and human contact), and the design of the assistance technology (comprising individual approach, affordability, and safety). Manzeschke et al. [107] argue that the following fields are relevant when comparing elderly users to the general user population: The elderly might have fewer financial resources than the working population; there are privacy aspects to acknowledge, because more health-related data are collected for elderly people and shared with doctors and caregivers; they might suffer from reduced mobility; their user involvement and robot acceptance might be lower; and their expectations towards the technology might be different, for example, due to less experience with modern technologies. While being rather reserved towards robots in elderly care, Sharkey and Sharkey [64] likewise reflect on ethical challenges with regard to robots caring for the elderly and suggest the following aspects that need to be considered: A potential reduction in the amount of human contact of elderly people, increased objectification of dementia patients, privacy issues, loss of personal liberty, deception and infantilization, and the question of who is to control the robots. Given the heterogeneity of the ethical aspects different authors propose for the use of robots in elderly care, ethics in elderly care might be a topic that warrants a review of its own.

Above and beyond the problems that have to be examined with regard to robots in elderly care, Riek and Howard [58] extend the discussion to issues to consider when deploying robots in other sensitive fields, such as therapy and general care settings. First, they refer to the problem of using therapeutic robots during research projects, more specifically, what happens once the project is finished. Usually, the robots are removed again, which may revoke all benefits the robots brought to the patients, leaving them in a worse state than before. Second, they refer to problems specific to physically assistive robots, namely, help with sensitive tasks such as bodily hygiene, and the fact that users will probably develop an emotional bond with the robots, as they might have little contact with other people. They cite works by authors such as Forlizzi and DiSalvo [108], Riek et al. [109], Scheutz [110], and Carpenter [111] to support their claim that, no matter the morphology of the platform, a certain degree of bonding will inevitably form. Apart from using robots with varying degrees of autonomy in the care sector, there is also the option of using telepresence robots. Niemelä et al. [112] provide ethical guidelines for using telepresence robots in residential care. Their results showed that ethical considerations were sometimes deemed more important than usability concerns. For example, it was considered crucial that the primary user, i.e., the elderly person, maintains control over accepting or rejecting an incoming call via the robot, regardless of the intention of the call. The participation of family members in health checks or hygiene care via the telepresence robot was considered ethically problematic and, therefore, was advised against. As a telepresence robot offers the possibility of being remotely controlled by a family member or a care worker, the authors argue that the aspect of the invasion of privacy by the robot is even more important than with conventional robots. For a more general discussion of ethical aspects of telepresence robots, we recommend Oliveira et al. [113].

Ethical Frameworks, Guidelines, and Their Implementation into Robots

Given the vast number of ethical issues to consider when designing robots for the various roles and user groups in current and future societies, it becomes clear that theoretical frameworks and guidelines are called for to bundle the multidisciplinary scholarship on ethics in (social) robotics and HRI. These frameworks and guidelines range from very broad theoretical discussions of ethics to detailed suggestions for the concrete algorithms necessary for robots to behave in ethical ways. Reijers et al. [114•] provide an extensive systematic literature review of methods to incorporate ethics into research and innovation in general.

Veruggio [115] takes into account general ethical problems linked to relationship formation between humans and machines (e.g., humanization of the human/machine relationship, i.e., cognitive and affective bonds towards machines [115, p. 615]) and suggests an ethical framework on the basis of the so-called “PAPA” code of ethics, which is taken from Computer and Information Ethics. The acronym PAPA stands for privacy, accuracy, property, and accessibility [115]. Privacy deals with the question of which information we must reveal to others, under which conditions and protections, and which information we can keep secret. Accuracy refers to the question of responsibility, more specifically, who is responsible for ensuring that information is authentic and accurate, and who is accountable for errors and the repair of resulting damages. The notion of property raises the questions of who owns information, what constitutes fair pricing for its exchange, through which channels information may be exchanged, who owns those channels, and how the exchange is regulated. Finally, accessibility comprises the right of a person and/or organization to obtain information, and the conditions under which it may do so [115]. These recommendations are applicable to all relationships between humans and machines and are not exclusively relevant for robots. In turn, the French advisory commission for the ethics of information and communication technology (ICT) research, CERNA (Commission for the Ethics of Research in Information Sciences and Technologies), recommends general ethical standards for robotics, aiming to provide tools and recommendations for research institutions and the associated researchers [116]. CERNA’s recommendations concern all ethically relevant fields, ranging from autonomy and decision-making, through the imitation of life and affective and social interaction, to robot-aided therapy and human-robot augmentation. For more specific recommendations for dealing with robots that are growing more and more intelligent, consider Kornwachs [117], for example. Focusing even more specifically on assistance robots, the literature refers to Kitchener’s [118] five ethical principles underlying the distribution and use of assistance technology, namely, beneficence, nonmaleficence, justice, autonomy, and fidelity. Beneficence is supposed to ensure that actions lead to results benefiting others. Nonmaleficence is connected to the previously mentioned laws by Asimov [41] and states that no harm should be caused to others. Justice refers to fairness in different contexts, namely, individual, interpersonal, organizational, and societal contexts. Autonomy aims at freedom of action and choice. Fidelity is the principle of behaving in a loyal, trustworthy, faithful, and honest way (also see [119]).

Paralleling the breadth of the literature on ethical challenges associated with robot deployment, the proposed ethical frameworks and guidelines also cover sensitive areas of HRI, such as care work. Accordingly, Riek and Howard [58] formulate “Specific Principles of Human Dignity Considerations”. In their work, they list 15 principles that must be considered when designing a robot or assistance technology. The principles encompass privacy, emotional needs, physical and psychological capabilities of the user, predictability of the robot, trust, and more formal issues such as laws and regulations. Some exemplary principles read: “The emotional needs of humans are always to be respected”, “Maximal, reasonable transparency in the programming of robotic systems is required.”, or “Avoid racist, sexist, and ableist morphologies and behaviors in robot design” [58, p. 6]. These principles are an important guiding framework for the development of assistance technology that maintains and supports human dignity. In addition, Misselhorn et al. [120] offer an ethical framework for the use of robots in the care context, illustrating their principles using the therapeutic seal robot Paro. For a general overview of ethical frameworks for the use of robots in elderly care, Vandemeulebroucke et al. [121] provide a systematic literature review of different ethics approaches and/or frameworks addressing the ethical issues of robots in the care sector; additionally, Mansouri et al. [122] offer a more general review of ethical frameworks for assistive devices, especially for use in elderly care. Finally, Huber et al. [123] take into account the aspect of relationships between humans and robots and suggest the “Triple-A Model” to incorporate ethics into the design of social companion robots. The model covers the aspects of assistance, adaptation, and attachment, and is supposed to help identify actual and potential ethical risks based on the different interaction levels of companion robots.

Evidently, the literature offers a rich body of research on ethical frameworks and guidelines to facilitate robot uptake in society. Thus, before deploying robot technology in any field whatsoever, it would be wise to conceptualize specific use cases, to take into account diverse user needs (e.g., through participatory design), and to reflect upon short- and long-term implications of the given scenario for the particular user group. In this respect, consulting user group-specific ethical frameworks can be helpful. Which frameworks are ultimately consulted also depends on where the research is conducted. To illustrate, Weber [124] compares three ethics frameworks frequently used in German-speaking countries: The “MEESTAR” model (e.g., [125, 126]), action sheets [127], and the ethics canvas [128]. MEESTAR is a model for the ethical evaluation of socio-technical arrangements and should be used by all stakeholders concerned with the use of the respective technology. The stakeholders are supposed to carry out a moral evaluation of the technology at hand and incorporate the results into the development process. It was originally developed for the ethical evaluation of technology used in elderly care. Action sheets can be used to adapt the evaluation dimensions of the MEESTAR model to other fields in a systematic way. The ethics canvas is an online tool that can be used to gain an overview of a moral field. Stakeholders are supposed to gather their knowledge and assumptions about different categories, such as affected people and/or groups, their relationships, and potential conflicts. This way, the expertise of all people potentially involved with the technology can be taken into account.

The ethical frameworks for different contexts of robot use give concrete recommendations on how robots should or should not behave in certain situations. However, these recommendations reflect only the theoretical side of ethical robot behavior. Another step is required to make robots behave as intended in practice, namely, the concrete technological implementation of ethics into robots. Different researchers have developed and tested algorithms that, for example, allow robots to make decisions in morally ambivalent situations (e.g., [129,130,131,132,133,134,135,136,137]). However, it may be critically discussed whether an implementation of ethics in the form of algorithms is feasible, or even possible. McBride and Hoffman [138, p. 77] argue that there is an “immense gap [...] between the architecture, implementation, and activity of humans and robots in addressing ethical situations”. They claim that a robot’s ethical capabilities are reduced to decisions in simple environments, while a human’s ethical capabilities are much more complex. Therefore, they suggest that, instead of applying the same ethical fundamentals used to guide human behavior to robots, it is necessary for humans and robots to communicate about ethics and explore the field together, in order to arrive at a new form of guidelines for ethical robot behavior. As machine learning approaches are especially relevant for fields in which programming concrete algorithms is out of scope due to the complexity of the task, aiming at a shared exploration of ethical situations might be a feasible way to help transform robots into ethical agents.
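To make the notion of an “implementation of ethics in the form of algorithms” more tangible, the following sketch outlines one pattern that recurs in the machine ethics literature: hard constraints that veto impermissible actions, combined with weighted duties that rank the remaining options. The constraint names, duty weights, and candidate actions are invented for illustration and do not reproduce any specific algorithm from the cited works.

```python
# Illustrative sketch only: hard ethical constraints veto actions outright, and
# weighted "duties" rank whatever remains. All names and weights are hypothetical.
from typing import Dict, List, Optional

# Deontological layer: any violation here makes an action impermissible.
HARD_CONSTRAINTS = ("causes_injury", "violates_consent")

# Consequentialist layer: duties and weights used to score permissible actions.
DUTY_WEIGHTS: Dict[str, float] = {
    "preserves_user_autonomy": 3.0,
    "respects_privacy": 2.0,
    "provides_assistance": 1.5,
}


def govern(candidates: List[Dict]) -> Optional[Dict]:
    """Drop actions violating hard constraints, then return the best-scoring one."""
    permitted = [
        a for a in candidates
        if not any(a["violates"].get(c, False) for c in HARD_CONSTRAINTS)
    ]
    if not permitted:
        return None  # No ethically permissible option: escalate to a human.
    return max(
        permitted,
        key=lambda a: sum(w for d, w in DUTY_WEIGHTS.items() if a["fulfills"].get(d, False)),
    )


if __name__ == "__main__":
    candidates = [
        {"name": "enter the bedroom unannounced to check on the user",
         "violates": {"violates_consent": True},
         "fulfills": {"provides_assistance": True}},
        {"name": "ask via intercom whether help is needed",
         "violates": {},
         "fulfills": {"preserves_user_autonomy": True, "respects_privacy": True,
                      "provides_assistance": True}},
    ]
    chosen = govern(candidates)
    print(chosen["name"] if chosen else "escalate to human caregiver")
```

Even in this toy form, the sketch illustrates McBride and Hoffman’s [138] point: the difficulty lies less in the selection logic than in specifying and perceiving the constraints and duties themselves, which is exactly where they locate the gap between human and robot ethical capabilities.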

Summary and Conclusion

Taken together, we demonstrated that roboethics is a highly complex and increasingly important topic with a vast amount of literature and discussion to examine. As a starting point, the current review featured the societal and ethical issues in human-robot interaction, concentrating on advancements within the last five years. The topics discussed range from general ethical issues that emerge from the introduction of robots into human lives and human society to very concrete ethical issues for specific contexts, such as robots in the care sector. An overview of ethical frameworks and guidelines and their technological implementation into robots aims at providing answers to the open ethical questions. It is imperative for the successful integration of robots into society and into our homes that ethical issues are considered in robotics research, robot development, and the deployment of robots in their various fields of use. Therefore, ethics in robotics is not only highly relevant for the scientific community, but also for developers, technicians, and prospective end users. Given the rapid technological development, there is a high probability that one day, robots will share our daily lives. Until then and beyond, ethics in (social) robotics and HRI will remain a crucial, if not inevitable, field of multidisciplinary scholarship, providing rich resources to ameliorate human interactions with novel technologies.