Introduction

In a 2017 report, legal scholar Radidja Nemar documented how the majority of civilians living under the constant presence of US-led, remotely operated drones in Yemen suffer from symptoms of post-traumatic stress disorder. She found these symptoms of mental illness irrespective of whether or not the civilians had lost a loved one to a drone attack. A striking summary of her findings reads as follows:

Our belief, which finds an empirical grounding in this study, is that the simple fact of living under drones has psychological consequences that are no different from those caused by the loss of a relative in a [drone] strike. … The intensity of the suffering is such that we believe it amounts to cruel, inhumane, and degrading treatment of civilians. (p. 40, [1])

While many factors contribute to the issue of drone use in Yemen (e.g., geopolitical factors, exposure to drones used as weapons), one undeniable factor is that the mere presence of robotic systems, even when remotely operated by humans, can profoundly impact people’s lives, posing serious harms and ethical issues [2].

At the time of writing, we are at an important juncture where discussion of the ethical, legal, and societal (ELS) issues of two highly related technologies is on the rise: robotics and artificial intelligence (AI). AI ethics in particular is a relatively new field that has rapidly gained popularity over the last 5 years. The number of publications mentioning AI ethics saw an explosive, more than 11-fold increase between 2015 and 2020 (see Fig. 1). This followed a number of high-profile AI ethics issues covered in the public media. A notable case is a 2016 ProPublica article highlighting serious, real-life cases of bias in machine learning algorithms used in the US criminal court system [3]. These developments have led the fields of roboethics and AI ethics to find common ground, enabling joint efforts such as the sharing and shaping of ethical principles. These efforts are summarized in earlier publications of this journal (see [4, 5]). However, recent trends have also included the treatment of roboethics issues as a subset of issues pertaining to “embodied AI” (e.g., the European Union’s March 2020 report The ethics of artificial intelligence: Issues and initiatives [6], and an influential review of AI ethics frameworks by Hagendorff [7]).

Fig. 1

Web of Science literature search on AI ethics and roboethics between 2005 and 2020. Search queries used are ((“AI” OR “artificial intelligence”) AND (“ethics” OR “ethical”)) and (((“robot” OR “robotics” OR “robots” OR “robotic”) AND (“ethics” OR “ethical”)) OR “roboethics”), respectively. The term roboethics has been in use since the First International Symposium on Roboethics in 2004

Both robotics and AI have multiple definitions. For the purposes of this paper, we borrow from Liao [8] and refer to AI as any machine that has cognitive abilities such as thinking and learning. AI ethicists have predominantly focused on the ethical implications of non-embodied, algorithmic systems (e.g., recommender systems) or virtual/digital agents (e.g., chatbots). An AI can be, and often is, embodied in physical or virtual systems (e.g., autonomous vehicles and virtual avatars, respectively), but embodiment or corporeality is not a necessary component of AI. In contrast, we refer to robots as systems with actuation and sensing capabilities for which having a physical body (i.e., corporeality) and co-presence (i.e., being in the same physical space as humans) are necessary conditions.

As the communities of roboethicists and AI ethicists continue to cross-pollinate ideas and solutions to similar, often interconnected challenges, we posit that it is especially useful to articulate the components of these technologies that give rise to divergent ethical issues. In this paper, we aim to summarize the set of ethical issues that uniquely arise in the design and deployment of interactive robotic systems. In particular, we highlight how the corporeality, co-presence, and physical interaction modalities afforded by robots influence people in ways that pose unique ethical challenges, irrespective of the degree of AI present in the system (see Fig. 2). To do so, we review recent findings in the field of human-robot interaction (HRI) that highlight the impact of interacting with co-present, corporeal robots.

Fig. 2

An illustrated overlap of technology ethics issues and relevant examples. The dashed area of the diagram highlights the set of issues presented by the physicality of interactive robotic systems, a focus of this paper

We first outline the two different beginnings that mark the launch of the domains of roboethics and AI ethics. Subsequently, we highlight recent developments in HRI that articulate how a robot’s physical presence, movement, and capacity to touch pose unique ethical implications compared to our interactions with algorithmic systems. We then outline recent efforts to incorporate ethics into HRI design. We conclude with a summary of how the current community of researchers is framing the study of ethics within HRI.

Ethical Implications Arising from Interaction with Robots

As social animals, we are especially influenced by the people around us, both directly (e.g., one person influencing another) and indirectly (e.g., through social norms and customs [9, 10]). We follow norms, the informal rules that shape and regulate the behaviors of members of a group [11]. Interactive technologies are increasingly part of what shapes our behaviors. Notably, Reeves and Nass presented the media equation, illustrating how people treat and are affected by computers and other technological media as though they were people or places [12]. Humans, in particular, have evolved to be naturally influenced by their physical environment [13]. The corporeality and co-presence of robots allow them to leverage this human nature and influence people in a programmable, predictable, and scalable manner, in ways distinct from non-embodied AI, virtual agents, or other digital technologies. These modalities include a robot’s physical presence and appearance, its capacity to move and perform nonverbal expressions in our physical space, and its capacity to come into physical contact with humans (touch).

As interaction with a variety of digital technologies has increasingly become an everyday part of our lives, decades of research have investigated whether and how human perception of and response to artificial agents differ across embodiments (e.g., virtual, telepresent, and physically co-present agents). Until recently, much of the research on human responses to different morphologies and corporealities of artificial agents has shown conflicting results (Li [14] provides a comprehensive overview of this topic). This is due, in part, to the variety of platforms, tasks, and measurements used in previous studies, making it difficult to compare stimuli in one study to those in another. Recent work by Hoffman et al. is making headway in providing a theoretical framework to disentangle these contradictory results [15]. In particular, they suggest that humans take their interactions with corporeal, co-present robots more seriously because, unlike virtual agents, such robots have the capacity to physically interact with, and possibly harm, people. We see evidence of this in the ways humans respond to robot presence, touch, and movement in situ, each of which carries ethical implications. In this section, we highlight these particular modes of interaction and how they relate to the ethical issues of interactive, corporeal robots.

Physical Presence and Appearance

As highlighted in the above example of drone use, a robot’s presence in our physical space can have a profound impact on people, irrespective of the robot’s degree of demonstrated intelligence or autonomous behavior.

Many studies in HRI have demonstrated that the presence of a robot in the room can have the same effect on humans as the presence of another person (i.e., social facilitation) in terms of task performance and the elicitation of feelings [16, 17]. A series of recent experiments by Spatola and colleagues demonstrates how anthropomorphism and robot appearance modulate the influence of robot co-presence on people, and suggests that this effect may tap into neurosocial mechanisms automatically triggered in our brains [17, 18]. One implication of these findings is that even before considering the design of the interactive behaviors of a robotic system, one must consider the ethics of putting a robot in the room in the first place.

The profound ways in which a robot’s presence impacts human behavior beyond what virtual agents can achieve are highlighted in recent HRI experiments on human conformity to robots [16, 19•, 20]. In traditional conformity experiments, originally devised by Asch [21], individual participants, surrounded by a group of confederates, are asked to answer simple questions with one obvious right answer (e.g., “which of the following lines has the same length as the one on the left?”). Experiments conducted with groups of humans consistently show that individuals readily give the same answers as the group, even when the group is clearly wrong. HRI replications of the classic experiments suggest that humans also conform to a group of co-present robots (i.e., follow the robots’ suggestions rather than standing by their own beliefs/decisions) [16, 19•, 20]. The results suggest that co-present robots exerting peer pressure on humans are more effective than virtual agents attempting the same, and that humans conform to robot suggestions significantly more when the task involves greater uncertainty or ambiguity [22, 23].

A factor related to co-presence is a robot’s appearance. Numerous studies illustrate that the appearance of a robot affects not only its user acceptance but also how much users trust the system [24]. Inspired by the rising discussion of dark patterns in human-computer interaction (HCI), referring to user interface designs that mislead users into taking unwanted actions or decisions for the benefit of another party [25], Lacey and Caudwell articulate that dark patterns exist in HRI designs as well [26]. They highlight that the recurring strategy of designing consumer robots to be cute and baby-like in appearance constitutes a dark pattern: the cute esthetics are used to manipulate and exploit the positive affective responses they generate in users in order to attract user attention, prolong interaction with the system, and enable further data collection.

Touch

Some of the ethical issues particular to interactive robotics arise from a robot’s ability to touch humans and thereby implicate their physical safety. Recent studies indicate that humans perceive a robot’s physical touch in a wide variety of ways, and the notion of the appropriateness of touch is starting to be discussed. Law et al., for example, conducted a series of experiments in which participants watched a robot touch or not touch a human on the shoulder [27]. They found that while people perceived the robot in the touch condition as more trustworthy and interpreted its touch as comforting, they also found the touch more inappropriate. To date, no unifying theory explains the variety of ways people interpret and respond to robot touch behaviors, and the study of its impact on HRI is still in its infancy. However, the appropriateness of robots using touch as an interaction modality with humans is tightly coupled with ethics in ways distinct from algorithmic systems. The development and use of sex robots, for instance, raise issues about acceptable touch behaviors between humans and robots independent of the degree of intelligence onboard the system (see [28] for a systematic review). Arnold and Scheutz articulate these as ethical issues arising from “the primacy and implicit dynamics of bodily perception” [29•]. Given how the treatment of our bodies triggers moral psychological responses in humans, some draw an explicit link between the ethics of robot use and religious ethics, in which the treatment of human bodies is subject to religious moral rules [30].

Movements and Nonverbal Expressions

Other forms of robot influence on humans have non-obvious points of differentiation from algorithmic systems. Humans perceive the expressivity of nonverbal communication between a robot and a virtual representation of the same robot to be similar [15]. Yet, movement in physical space can affect human behaviors in ways distinct from motions depicted on a screen. Entrainment effects observed in HRI are an example. Entrainment is a phenomenon in which humans (agents) mirror or synchronize elements of the behaviors of those around them, such as the unintentional synchronization of steps between a pair of people walking side by side [31]. In an imitation game involving a NAO robot, Ansermin et al. observed that humans entrain to the rhythmic behavior of the robot unidirectionally and unintentionally [32]. Ciardo et al. also document similar effects in a robot teaching task, where a human teaching a robot attunes to and mirrors the pauses and delays exhibited in the learning robot’s behavior [33]. Such movement entrainment in HRI does not require a social context like the one used in Ansermin et al. [32]. An empirical experiment conducted by Lorenz et al., for example, involved a 7-DoF robotic manipulator that simply performed reach-retract motions in a tabletop workspace shared with a participant [34]. Their results illustrate that humans synchronize their goal-directed arm motions to the movement of an articulated robot in a dyadic task, for example by slowing down to follow the rhythm of the robot’s motion. Humans also entrain to nonverbal expressions exhibited by robots, ranging from affective and gesture-based expressions to protolinguistic behaviors. In 2002, Breazeal outlined how participants quickly mimic not only the body posture and facial expressions of a robot, but also change their tone of voice (soothing, scolding, etc.) to regulate the robot’s interactive response [35]. This is one way humans improve the efficacy of communication with each other and with co-present robots. Since then, we have found evidence that humans adopt the prosodic behaviors (intonation and intensity of speech) of the robots they interact with [27].
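To make the notion of movement entrainment concrete, one common way to quantify it is to cross-correlate human and robot motion signals and look for a stable lag at which the human tracks the robot. The following is a minimal sketch under that assumption; the function, variable names, and toy signals are our illustrations, not the analysis used in the studies cited above.

```python
import numpy as np

def entrainment_lag(human_vel: np.ndarray, robot_vel: np.ndarray, dt: float):
    """Estimate the lag (in seconds) at which human motion best tracks robot motion.

    Returns (lag_seconds, peak_correlation). A consistent positive lag with a
    high peak correlation suggests the human is following the robot's rhythm.
    """
    # Normalize both signals to zero mean and unit variance
    h = (human_vel - human_vel.mean()) / human_vel.std()
    r = (robot_vel - robot_vel.mean()) / robot_vel.std()
    # Cross-correlate over all lags; index k corresponds to lag lags[k]
    corr = np.correlate(h, r, mode="full") / len(h)
    lags = np.arange(-len(r) + 1, len(h))
    best = int(np.argmax(corr))
    return lags[best] * dt, corr[best]

# Toy example: a human whose reach-retract cycle trails the robot's by ~0.2 s
t = np.arange(0, 30, 0.01)                    # 30 s of motion sampled at 100 Hz
robot = np.sin(2 * np.pi * 0.5 * t)           # robot's 0.5 Hz periodic motion
human = np.sin(2 * np.pi * 0.5 * (t - 0.2))   # human lagging 0.2 s behind
lag, peak = entrainment_lag(human, robot, dt=0.01)
print(f"lag = {lag:.2f} s, peak correlation = {peak:.2f}")  # ~0.20 s, ~0.99
```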

We have yet to come across real-life cases where human-robot entrainment effects have been abused in a morally harmful manner. However, it is not far-fetched to envision such cases. For example, collaborative robots in assembly lines could be programmed to speed up the pace of work unbeknownst to the human workers, resulting in new workplace injuries, or programmed robot behaviors could inadvertently influence the norms of a particular task or place.

With their corporeality, social robots can also leverage a richer platform for nonverbal expression to influence people. For instance, social robots can be designed to persuade people into making decisions ranging from food and entertainment choices to making donations [36]. In one study, a video recording of a person and a geminoid (a hyper-anthropomorphic robot doppelgänger of that person) were used to persuade people. In a product marketing task, the authors found that the geminoid was perceived to be just as persuasive as the video recording of the human [37]. Ghazali et al. demonstrated how social responses, such as compliance and reactance, affect user acceptance of robots’ persuasion attempts [38].

The persuasive influence robots can have on people raises ethical issues due to its interference with human agency. Scheutz, for example, raised the concern that interactive robots can lead users to form unidirectional emotional bonds that can be abused [39]. In discussing the ethics of robots that nudge people toward a choice, Borenstein and Arkin assert that normative questions about the agency of users subjected to robotic nudges remain even if robots are designed to nudge people for their own good [40, 41•]. They conclude by suggesting that, in evaluating the ethics of a robot that nudges, we consider questions such as whether the person subject to the nudge is aware of it and has control over it [41•]. Although the contexts within which robots are used to persuade people remain limited today, the technology has the potential to serve as a programmed medium of influence in the future, thereby helping to establish new norms within society.

Incorporating Ethics into Interactive Robot Design

Philosophers have long identified ways in which technology serves as a proxy for the normative actions and decisions of humans, especially those of its designers [42]. Using the simple example of low-hanging bridges on Long Island designed to reinforce socioeconomic disparities, Langdon Winner famously proposed that artifacts, including technologies, have politics [43]. Unlike bridges, a robot poses a different set of ethics challenges owing to its ability to actively interact with people. It can also change its behavior over time, from one software upgrade to the next, so to speak. Due to the corporeality of robots discussed above, the ethical issues pertaining to interactive robotic systems go beyond ensuring that a system is used ethically and that it makes ethical decisions (issues very much shared with the community of AI ethicists) to include how an action is carried out. This implies that incorporating ethics into the design of an interactive robotic system includes the decision to place a robot in a room in the first place, what the robot should look like, and how it should communicate and interact with our physical bodies, objects, and the environment.

In one of the first attempts to tackle ethics issues unique to HRI, Riek and Howard developed a code of ethics for HRI practitioners [44]. In it, they point to some of the above-mentioned issues unique to HRI, such as the need for sensitivity in designing touch as an interaction modality and in making decisions regarding robot morphology. The code of ethics states that “[h]uman frailty is always to be respected, both physical and psychological” and that designers should “[a]void racist, sexist, and ableist morphologies and behaviors in robot design.” Since the publication of the code of ethics, active discussions have taken place within the HRI community. These include interdisciplinary workshops to bridge the gap between ethics and policy discussions in HRI [45] and efforts to create community guidelines addressing the possible psychological manipulation and exploitation of humans in HRI [46].

New frameworks to better articulate the ELS issues particular to interactive robotics are being introduced as well. For instance, Payr [47] frames this line of inquiry as interaction ethics. Using a case study, she demonstrates the nature of the ethical considerations involved in interaction design, including detailed decisions in the implementation of turn-taking, topic changes, and politeness behaviors on a robot. Webb et al. [48], on the other hand, leverage Lucy Suchman’s pivotal work on situated action and plans to argue for the creation of an ethical black box: a system that records HRI data so that the system’s designer/user/operator can review interaction failures and hazards and inform future design choices. Webb et al. assume that interaction design for social robots has inherent ethical implications and that it is important to recognize these implications within their specific context.
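To make the idea concrete, the following is a minimal sketch of the kind of append-only record such an ethical black box might keep. The fields, class names, and file format are our illustrative assumptions, not the design proposed by Webb et al. [48].

```python
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class InteractionEvent:
    timestamp: float        # wall-clock time of the event
    modality: str           # e.g., "speech", "gesture", "touch", "navigation"
    robot_action: str       # what the robot did
    human_response: str     # observed human reaction, if any
    context: dict = field(default_factory=dict)  # task, location, consent state, etc.

class EthicalBlackBox:
    """Append-only log of human-robot interaction events for post-hoc review."""

    def __init__(self, path: str):
        self.path = path

    def record(self, event: InteractionEvent) -> None:
        # One JSON object per line, so the log can be replayed or audited later
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(event)) + "\n")

# Usage: log a touch action together with the consent state in force at the time
ebb = EthicalBlackBox("hri_log.jsonl")
ebb.record(InteractionEvent(
    timestamp=time.time(),
    modality="touch",
    robot_action="tap_shoulder",
    human_response="startled",
    context={"task": "attention_getting", "consent_given": False},
))
```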

These developments in roboethics highlight a perspective through which every detail of an interactive robot’s existence and its programmed behaviors can be said to have ELS implications. Noting the level of complexity involved in considering ethics in interactive robot design, some scholars are taking steps to tackle practical issues within the domain. One such development is the investigation of consent in HRI by Sarathy et al. [49], in which the authors bring the meaning of consent from legal scholarship into HRI to highlight the challenges of operationalizing consent and the need to implement it even in normatively neutral contexts. However, much more remains to be done to guide the many ethics considerations involved in designing interactive robots that influence people. Moreover, ethics considerations become even more complex when interactive robots use AI: issues pertaining to the fairness, accountability, and transparency of the algorithms would need to be considered in addition to the above-mentioned issues unique to interactive robotics.
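As a toy illustration of what operationalizing consent might look like at the level of robot control code, consider a default-deny gate checked before any contact action. The states and policy below are our simplified assumptions and do not capture the richer, revocable, context-dependent notion of consent that Sarathy et al. discuss [49].

```python
from enum import Enum, auto

class Consent(Enum):
    NOT_ASKED = auto()   # robot has not requested consent yet
    GRANTED = auto()     # explicit, current grant from the person
    DENIED = auto()      # person refused
    REVOKED = auto()     # person withdrew a previous grant

def may_perform_touch(consent: Consent) -> bool:
    # Default-deny: only an explicit, unrevoked grant permits physical contact
    return consent is Consent.GRANTED

def tap_shoulder(consent: Consent) -> str:
    if not may_perform_touch(consent):
        # Fall back to a contact-free modality instead of touching
        return "speak: 'Excuse me, may I get your attention?'"
    return "execute: tap_shoulder_trajectory"

# Consent must be re-checked before every contact action, since it can be revoked
print(tap_shoulder(Consent.NOT_ASKED))  # contact-free fallback
print(tap_shoulder(Consent.GRANTED))    # touch permitted
```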

Conclusion

In this paper, we outlined how the corporeality and co-presence of robots mark a point of divergence between roboethics and AI ethics. The latest findings in HRI and the growing discussions within the community highlight that treating roboethics issues as a subset of AI ethics risks undermining the issues unique to the design of interactive robots. To date, among the numerous published ethics principles, frameworks, and guidelines discussing AI or robotics, there are only two published standards on the ethical design of robotic devices (BS 8611:2016, Robots and robotic devices: Guide to the ethical design and application of robots and robotic systems [50], and the IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being [51]). Much more work is needed to navigate the ethics issues particular to the design of interactive robots, to bridge ethics thinking into robot engineering design and technology governance, and to coordinate research toward the development of robots designed with ethics in mind.

While this paper emphasized the factors differentiating roboethics from AI ethics issues, this should by no means be taken as discouraging the cross-pollination of ideas between the two communities. Interdisciplinary conferences such as We Robot and the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society have been catalysts and inspirations for urgent ethics discussions in the roboethics and AI ethics communities. Underlying all of the ELS issues across the corporeality-algorithmic spectrum is the fundamental question of what kind of society we would like to envision as technology increasingly pervades our lives. Synergy between the communities is essential for accelerated advancement in the respective domains and for catalyzing coordinated action toward responsible engineering practices across the corporeality-algorithmic spectrum.