
Introduction

RobotsFootnote 1—as it seems—are here to stay.Footnote 2 But with which status, and under what normative conditions? Controversies about the moral and legal status of robots in general, and of humanoid (anthropomorphic) robots in particular, are among the top debates in recent practical philosophy and legal theory (Danaher 2017a; Gunkel 2018; Bryson 2019; Dignum 2019; Basl 2019; Nyholm 2020; Wong and Simon 2020; Andreotta 2020). Quite obviously, the state of the art in robotics and the rapid further development of Artificial Intelligence (AI) raise moral and legal issues that significantly exceed the horizon of classic normative theory building (Behdadi and Munthe 2020). Yet what exactly is the problem?

As robots become increasingly sophisticated, and engineers try harder to make them quasi “sentient” and “conscious” (Ekbia 2008; Torrance 2012), we are faced with AI-embedding systems that are ambivalent by design. They combine properties of tools with seemingly “psychological capacities that we had previously thought were reserved for complex biological organisms such as humans” (Prescott 2017: 142).Footnote 3 Hence there is a growing incentive to consider the ontological status of robots as “liminal”: robots seem to be “neither living nor simply mechanical” (Sandini and Sciutti 2018: 7:1). Therefore, it is not surprising that humans show inclinations to treat humanoid robots “as more than just tools”, regardless of the “extent to which their machine nature is transparent” (Sandini and Sciutti 2018; see also Rosenthal-von der Pütten et al. 2018). After all, human brains have evolved mainly to understand (and interact with) humans, so they are likely to be easily “tricked” into interpreting human-like robot behaviour “as if it were generated by a human” (Sandini and Sciutti 2018: 7:1). Consequently, it is time to come to terms with the question of how “intelligent machines” like robots (especially the humanoid ones) should be categorized and treated in our societies (Walch 2019).

Discussions on that issue have so far been dominated by a clear divide: What some find appealing, others deem appalling: “robot rights” and “legal personhood” (“e-personality”) for AI systems.Footnote 4 While Luciano Floridi, Director of the Oxford Internet Institute, dismisses thinking and talking about the “counterintuitive attribution of rights” to robots as “distracting and irresponsible” (Floridi 2017: 4; see also Coyne 1999; Winfield 2007), Forbes Magazine ranks this topic as one of the most important AI ethics concerns in the early 2020s (Walch 2019). However contradictory opinions may be,Footnote 5 the task at stake is undisputed: We need to organize Human–Robot Interaction (HRI) according to ethical and juridical principles that optimize benefit and minimize mutual harm (cf. Floridi 2013; Lin et al. 2014; Nemitz 2018; Scharre 2018; Bremner et al. 2019; Brockman 2019; Loh 2019; Schröder et al. 2021).

My paper takes up this topic from a legal ethics perspective and proceeds in three main steps. It begins with definitions of central terms and an exposition of the main aspects under which robots (as AI-embedding machines and AI-controlled agents) can become a topic of moral and juridical discourse. Then follows a brief review of some of the most prominent theses from recent literature on the moral and juridical status of robots. In conclusion, a balanced interim result and a modest new proposal are presented and substantiated in light of the preceding discussion. We will start with definitions.

Definitions and Brief Exposition of the Topic

Rights talk is a prominent issue in both moral and legal theory and praxis. Thus, we need to briefly clarify the different basic meanings that the term “rights” has in the respective contexts.

Ethical discourse on moral norms and rights concerns an enormously broad field of problems and investigation. Yet basically it is an effort in theory building on descriptive or normative abbreviatures of commonly acceptable social conduct (Lapinski and Rimal 2005).Footnote 6 Even from the perspective of some theories of natural law, moral norms and rights are cultural products (like values, customs, and traditions); they represent culturally shaped ideas and principles of what a reasonable grammar of practical freedom should look like (Schröder 2012; Bryson et al. 2017; Spiekermann 2019).

Legal rights and norms differ from purely moral ones by their specifically institutional character. Legal rights and norms “exist under the rules of legal systems or by virtue of decisions of suitably authoritative bodies within them” (Campbell 2001). Following the standard Hohfeldian account (Hohfeld 1913), rights as applied in juridical reasoning can be broken down into a set of four categories (“the Hohfeldian incidents” in Wenar 2015)Footnote 7:

  • Privileges

  • Claims

  • Powers

  • Immunities

In the case of “privileges”, you have a liberty or privilege to do as you please within a certain zone of privacy. As to “claims”, they mean that others have a duty not to encroach upon you in that zone of privacy. “Powers”, in this context, mean that you have the ability to waive your claim-right not to be interfered with in that zone of privacy. And, finally, “immunities” provide for your being legally protected against others trying to waive your claim-right on your behalf.

Obviously, the four above-mentioned categories logically relate to one another: “Saying that you have a privilege to do X typically entails that you have a claim-right against others to stop them from interfering with that privilege” (Danaher 2017b: 1).
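To make the logical structure of this scheme more tangible, the following minimal sketch (in Python, a toy model of my own, not part of Hohfeld’s, Wenar’s or Danaher’s texts) encodes the four incidents and the entailment Danaher mentions: granting a privilege typically comes bundled with a claim against interference.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Incident(Enum):
    PRIVILEGE = auto()  # liberty to act within a protected zone
    CLAIM = auto()      # others owe a duty not to encroach on that zone
    POWER = auto()      # ability to waive one's own claim-right
    IMMUNITY = auto()   # protection against others waiving it on one's behalf

@dataclass
class Right:
    holder: str
    action: str
    incidents: set = field(default_factory=set)

def grant_privilege(right: Right) -> None:
    """A privilege to do X typically entails a claim-right against
    interference with X (Danaher 2017b: 1), so both are added together."""
    right.incidents.add(Incident.PRIVILEGE)
    right.incidents.add(Incident.CLAIM)

# Hypothetical example of a right bundle
speech = Right(holder="citizen", action="express an opinion")
grant_privilege(speech)
print(Incident.CLAIM in speech.incidents)  # True: the privilege carries a claim
```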

The Classical Ontological Stance and the Recent “Relational Turn” in Animal, Robot and Machine Ethics: Mark Coeckelbergh’s Analysis

From the point of view of robot and machine ethics, the question of robot rights refers to the kind of relations we have or will develop with robots as AI-embedding machines.Footnote 8 According to Mark Coeckelbergh,Footnote 9 we need to adapt our practice of moral status ascription to the fact that the number of candidates for moral patients and agents is growing (Coeckelbergh 2011, 2012a; see also Danaher 2019; Birch 1993; Haraway 2008; Hart et al. 2012; Latour 2015). Coeckelbergh criticizes the fact that classificatory thinking in animal and machine ethics is usually one-sidedly property-based: entities are considered in isolation from other entities, thereby reducing ethics to a kind of mechanical thinking.Footnote 10 For Coeckelbergh, this raises three major problems: (1) How can we know which property is sufficient, or at least decisive, for ascribing moral status? (2) How could we indubitably establish that an entity indeed has a particular property P? And finally (3) how could we draw really sharp boundaries between different kinds of entities?Footnote 11

Coeckelbergh’s scepticism about property-based ethical classification leads him to plead for an ecological anthropology as proposed by Tim Ingold (Ingold 2000). In Ingold’s “Ecology of Materials”, all entities are considered as nodes in a field of relationships, which can only be understood in relational, ecological, and developmental or growth terms (Ingold 2012). Following Ingold, Coeckelbergh interprets moral standing as an expression of active, developing relationships between entities. Instead of asking what property P counts for moral standing S, the new question is: “How should we relate to other beings as human beings who are already part of the same world as these non-human beings, who experience that world and those other beings and are already engaged in that world and stand already in relation to that world?” (Coeckelbergh 2013a, b). Thus, Coeckelbergh considers relations as basic conditions for moral standing: “The relational approach suggests that we should not assume that there is a kind of moral backpack attached to the entity in question; instead moral consideration is granted within a dynamic relation between humans and the entity under consideration” (Coeckelbergh 2010). On this account, relations are not to be seen as properties, but rather “as a priori given in which we are already engaged, making possible the ascription of moral status to entities” (Swart 2013).

We will return to reviewing this approach more closely when we come to David Gunkel’s relation-based theory of “robot rights”. But let us first turn to the juridico-legal discourse on the robots-and-rights topic, just to add this perspective to what we have seen in moral philosophy.

The Juridical Perspective: The “Accountability Gap” Implying a “Responsibility Gap”

Following Jack M. Balkin’sFootnote 12 clear-cut analysis, there are three key problems that robotics and AI agents present for law:

  • Firstly, there is the problem of how to deal with the emergence of non-human agents in the social worlds of humans. How should we “distribute rights and responsibilities among human beings when non-human agents create benefits like artistic works or cause harms like physical injuries”? The difficulty here arises from the “fact that the behavior of robotic and AI systems is ‘emergent’; their actions may not be predictable in advance or constrained by human expectations about proper behavior. Moreover, the programming and algorithms used by robots and AI entities may be the work of many hands, and may employ generative technologies that allow innovation at multiple layers. These features of robotics and AI enhance unpredictability and diffusion of causal responsibility for what robots and AI agents do” (Balkin 2015: 46).Footnote 13

  • Secondly, there is the problem of the “substitution effect”. What we already see now will become even clearer in the future: “People will substitute robots and AI agents for living things—and especially for humans. But they will do so only in certain ways and only for certain purposes. In other words, people tend to treat robots and AI agents as special-purpose animals or special-purpose human beings. This substitution is likely to be incomplete, contextual, unstable, and often opportunistic. People may treat the robot as a person (or animal) for some purposes and as an object for others” (Balkin 2015: 46).

  • Thirdly, we are not dealing here with a static configuration of challenges to the law. Rather, we are faced with a steadily evolving dynamic field of often disruptive factors. As Balkin put it: “We should not think of essential characteristics of technology independent of how people use technology in their lives and in their social relations with others. […] Innovation in technology is not just innovation of tools and techniques; it may also involve innovation of economic, social and legal relations. As we innovate socially and economically, what appears most salient and important about our technologies may also change” (Balkin 2015: 48f).

The most obvious legal problem with robots that can potentially harm people’s physical integrity or property is an “accountability gap” implying a “responsibility gap”. There are at least two important legal levels at which this issue creates problems: criminal law and civil law (Keßler 2019). Footnote 14

In criminal law, only natural persons—in the sense of the law, real, living people—can be held responsible for their actions. At the civil law level, on the other hand, legal persons such as companies can also be included. As the law currently stands, robots fall into neither of the two fields.

Manufacturers of robots can be held responsible if it can be proven that they used the wrong materials in construction or made mistakes in programming: for example, if a self-driving forklift in a high-bay warehouse drops a pallet and this is clearly related to faulty programming, the manufacturer of the truck is liable for the damage. However, proving such errors is likely to become even more difficult in the future.

The robots of the future will learn independently and will make decisions based on their past experiences. Programmers can and must provide a faultless framework for this. However, if robots make wrong experience-based decisions, manufacturers cannot be held accountable so easily. In general, one could say: as long as manufacturers have acted with all possible and reasonable care and have made no mistakes, they cannot be found guilty under our current understanding of criminal law.

At the civil law level, there is also the possibility of liability that does not depend on fault (strict liability). The manufacturer could then be held liable if the damage is due to a defect in the robot that is attributable to the manufacturer. Otherwise, it will be difficult to hold anyone liable.

If robots violate their duties and cause damage, they would have to take responsibility, for example financially. In principle, robots act independently when they make their experience-based decisions. They just cannot be punished like humans: imprisonment, for example, means nothing to robots, and robots do not have assets of their own. Perhaps a fund will be set up from which penalties imposed on the machines are paid. Or, ultimately, it is the manufacturers who are responsible for the mistakes of their robots: they would have to monitor the devices, detect and fix errors, and, in the worst case, recall the robots.

Recent Juridical Tendencies Towards Advocating Legal Personality for Robots

Some legal scholars like Gunther TeubnerFootnote 15 argue in favour of granting rights and legal personality to AI systems, depending on the degree of independence that AI systems are bestowed with. Teubner holds that personification of non-humans is “best understood as a strategy of dealing with the uncertainty about the identity of the other, which moves the attribution scheme from causation to double contingency and opens the space for presupposing the others’ self-referentiality” (Teubner 2006: 497); hence Teubner does not recognize any “compelling reason to restrict the attribution of action exclusively to humans and to social systems, as Luhmann argues. Personifying other non-humans is a social reality today and a political necessity for the future. The admission of actors does not take place … into one and only one collective. Rather, the properties of new actors differ extremely according to the multiplicity of different sites of the political ecology” (Teubner 2006: 497; see also Calerco 2008; Campbell 2011). On Teubner’s account, granting legal personality to AI systems would fill the Accountability Gap, thereby maintaining the integrity of the legal system as a whole and advancing the practical interests of humans (Teubner 2006).

Going beyond Teubner’s view, US legal scholar Shawn J. BayernFootnote 16 has argued that, under US law of limited liability companies (LLCs), legal personality could be bestowed on any type of autonomous systems. By means of a special transactional technique, legal entities (mainly LLCs) in the US could be governed entirely by autonomous systems or other software, without any ongoing necessary legal oversight or other involvement by human owners or members (Bayern 2019). For the time being, however, it seems clear that any autonomous system would probably “lack the basic acumen necessary to take many business decisions” (Turner 2019: 177). Thus, it seems unclear if Bayern’s point of view would be shared by the courts (Turner 2019).

In a recent groundbreaking book called “Robot Rules” (Turner 2019), legal scholar Jacob TurnerFootnote 17 deals with the Bermuda triangle of AI-related legal problems: Who is responsible for harms as well as for benefits caused by AI? Should AI have rights? And last but not least: How should ethical and juridical rules for AI be set and implemented?

Rather than literally formulating “robot rules” (see also Coeckelbergh 2014), Turner’s book sets out to “provide a blueprint for institutions and mechanisms capable of fulfilling this role” (Turner 2019: 372). On Turner’s account, there are four reasons for protecting the rights of others that could be applied to at least some types of AI and robots:

  1. The ability to suffer

  2. Compassion

  3. The value of something or somebody to others

  4. “Situations where humans and AI are combined” (Turner 2019: 145)

Regarding the “Argument from the Ability to Suffer”, it is clear that some sort or degree of “artificial consciousness”Footnote 18 would be a necessary precondition for a claim to legal protection based on the ability to suffer. “Pain”, in this context, could be understood as “just a signal which encourages an entity to avoid something undesirable”; thus defined, it would not be difficult in Turner’s eyes to “acknowledge that robots can experience it” (Turner 2019: 152). Turner’s conclusion here is that “if an AI system was to acquire this quality, then it should qualify for some moral rights” (Turner 2019: 146).

The “Argument from Compassion” works on the premise that we obviously tend to protect certain entities because (and as far as) we have “an emotional reaction to them being harmed” (Turner 2019: 155). In the case of robots, the mere fact that they look as if they were conscious and able to suffer may suffice to trigger our psychological tendency to develop feelings and compassion for them.

The “Argument from Value to Humanity” draws on a different observation. It is based on the fact that a whole range of things can be protected by law not because these things have a particular definable use, but “rather for a panoply of cultural, aesthetic and historical reasons” (Turner 2019: 165). These reasons can be deemed to constitute an “inherent value” of the objects at stake. To exemplify this idea, Turner refers to art. 20a of the German Constitution (Grundgesetz) which says: “Mindful also of its responsibility towards future generations, the state shall protect the natural foundations of life and animals”.

Last but not least, there is an “Argument from Posthumanism” referring to hybrid organisms, cyborgs and “electronic brains”. This argument draws on the fact that in technically enhanced human bodies, AI technology and human minds are not always really “separate”. Rather, humans and AI seem to combine to become a symbiotic entity—thus developing into something “greater than just the sum of their parts” (Turner 2019: 167). Accordingly, in these cases the strict distinction between what is human and what is artificial may become increasingly fluid or even obsolete (cf. Turner 2019: 169).

With humans augmented by AI, boundary issues can arise: “when, if ever, might a human lose their protected status? […] What about if 20%, 50% or 80% of their mental functioning was the result of computer processing powers?” (Turner 2019: 168). Probably we could suppose a broad consensus here that augmentation or replacement of human organs and physical functions with artificial substitutes “does not render someone less deserving of rights” (Turner 2019: 168).

While rights in general are social constructions, legal personality is, more specifically, a juridical fiction, created in and through our legal systems. On these grounds, it is up to us to decide to what exactly it should apply and to define its precise content (cf. Turner 2019: 175).

As Turner points out, legal personality—instead of being a single notion—is “a technical label for a bundle of rights and responsibilities” (Turner 2019: 175). It is a juridical “artifice” designed to make sure that “legal people need not possess all the same rights and obligations, even within the same system”. In the case that one country grants (or plans to grant) legal personality to AI systems, this could certainly have a domino effect on other nations (cf. Turner 2019: 180).

On the other hand, arguments are brought forward against granting legal personality to AI systems. One draws on the idea of the “Android Fallacy”, meaning the mistaken conflation of the concept of personality tout court with “humanity” as such (cf. Turner 2019: 189).

Another point of departure for rejecting legal personality for AI systems is the fear that robots with e-personality could be (mis-)used and exploited as liability shields by human actors (or whole companies) for selfish motives.

Furthermore, one could argue that robots should not be given rights or even legal personality because they themselves would be unaccountable rights violators (Turner 2019: 193).

David J. Gunkel’s “Robot Rights” (2018)

Unsatisfied by traditional moral theorizing on human–machine relations (Gunkel 2007, 2012), philosopher David J. GunkelFootnote 19 has made a strong case for “robot rights” as different from human rights. The latest and systematically most accomplished versions of his approach are his books “Robot Rights” (Gunkel 2018) and “How to Survive a Robot Invasion: Rights, Responsibility, and AI” (Gunkel 2019). Gunkel’s approach is based on a critique of all traditional moral theorizing in which ontological reflection actually precedes the ethical one: First you ask (and try to clarify) what a certain entity “is”; then you can proceed to the ethical question of whether or not this entity can or should be attributed a certain moral value (Gunkel 2018: 159). Gunkel pleads for thinking otherwise. He wants to “deconstruct” the aforementioned “conceptual configuration” and remix the parameters involved.

Gunkel’s argument draws heavily on Emmanuel Levinas’ assertion that ethics precedes ontology, not the other way around. For Levinas, intersubjective responsibility originates in face-to-face encounters (Lévinas 1990).Footnote 20 On this account, “intersubjective experience proves ‘ethical’ in the simple sense that an ‘I’ discovers its own particularity when it is singled out by the gaze of the other. This gaze is interrogative and imperative. It says ‘do not kill me’. It also implores the ‘I’, who eludes it only with difficulty, although this request may have actually no discursive content. This command and supplication occurs because human faces impact us as affective moments or, what Levinas calls ‘interruptions’. The face of the other is firstly expressiveness. It could be compared to a force” (Campbell 2001).

On Gunkel’s account, this means that “it is the axiological aspect, the ought or should dimension, that comes first, in terms of both temporal sequence and status, and the ontological aspect (the is or can) follows from this decision” (Gunkel 2018: 159).

If one follows the thesis of “the underivability of ethics from ‘ontology’” (Duncan 2006: 277), encountering others and otherness changes its meaning as regards the sequence of challenges associated with it. On Gunkel’s view, “we are initially confronted with a mess of anonymous others who intrude on us and to whom we are obliged to respond even before we know anything at all about them and their inner workings” (Gunkel 2018: 159f). On these grounds, Gunkel advocates proposals to grant robots “with a face” some basic moral rights to be respected by humans in their common social worlds (Gunkel 2018: 171–175).

Gunkel interprets his approach as “applied Levinasian philosophy” (Gunkel 2018: 170), knowing that Levinas never wrote about robots, technology or robotics. As it seems, Gunkel’s “applied Levinasian philosophy” is also inspired by Silvia Benso’s book on the “face of things” (Benso 2000). Accordingly, Gunkel admits some difficulties arising from the application of Levinas’s philosophy to the study of robot rights. He is aware of the fact that Levinas’ ethics exclusively concerns relationships between humans and between humans and God. Therefore, as Katarzyna Ginszt notes in her comment on Gunkel’s hermeneutics, applying Levinas’ thought to robots would “require reading Levinas beyond the anthropocentric restrictions of the ‘Other’ that is presupposed to be a human being” (Ginszt 2019: 30). Moreover, Gunkel would have to make sure that this kind of “broadening the boundaries” could not easily be misunderstood as relativistic but could be clearly recognized as “relational” in Coeckelbergh’s sense (Ginszt 2019: 30). Closely read, Gunkel’s formulations meet these requirements.

The EPSRC Paper on “Principles of Robotics”

Nevertheless, Gunkel’s plea for robot rights is generally perceived as a bold proposal and has earned much criticism. In the eyes of many robo-ethicists, Gunkel goes too far in his account of the ethical implications of having robots in our society (cf. Bryson 2019). Those who do not want to take the “relational turn” in roboethics insist that “Robots are simply not people. They are pieces of technology” (Boden et al. 2017: 126). From this point of view, it seems indeed counterintuitive to attribute rights to robots. Rather, it is consistent to emphasize that “the responsibility of making sure they behave well must always lie with human beings. Accordingly, rules for real robots in real life must be transformed into rules advising those who design, sell and use robots about how they should act” (Boden et al. 2017: 125; see also Pasquale 2018).

How this responsibility might be practically addressed is outlined in a paper by the British Engineering and Physical Sciences Research Council (EPSRC) called “Principles of Robotics. Regulating robots in the real world” (Boden 2011): “For example, one way forward would be a licence and register (just as there is for cars) that records who is responsible for any robot. This might apply to all or only operate where that ownership is not obvious (e.g. for a robot that might roam outside a house or operate in a public institution such as a school or hospital). Alternately, every robot could be released with a searchable online licence which records the name of the designer/manufacturer and the responsible human who acquired it […]. […] Importantly, it should still remain possible for legal liability to be shared or transferred e.g. both designer and user might share fault where a robot malfunctions during use due to a mixture of design problems and user modifications. In such circumstances, legal rules already exist to allocate liability (although we might wish to clarify these, or require insurance). But a register would always allow an aggrieved person a place to start, by finding out who was, on first principles, responsible for the robot in question” (Boden 2011: 128).
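Purely as an illustration of how such a licence-and-register scheme might be laid out, here is a minimal sketch in Python; all field and function names are my own assumptions and not part of the EPSRC text. The point is simply that a searchable record per robot, naming the designer/manufacturer and the responsible human, gives an aggrieved person “a place to start”.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List, Optional

@dataclass
class RobotLicence:
    """One searchable licence record per robot, analogous to a car registry."""
    licence_id: str
    designer_or_manufacturer: str
    responsible_human: str            # the person who acquired the robot
    registered_on: date
    operates_in_public: bool = False  # e.g. roams outside a house, or works in a school/hospital
    liability_sharers: List[str] = field(default_factory=list)  # e.g. ["designer", "user"]

def who_is_responsible(registry: Dict[str, RobotLicence], licence_id: str) -> Optional[str]:
    """Look up who was, on first principles, responsible for the robot in question."""
    entry = registry.get(licence_id)
    return entry.responsible_human if entry else None

# Hypothetical usage
registry: Dict[str, RobotLicence] = {}
lic = RobotLicence("ROB-0001", "Example Robotics Ltd", "J. Doe", date(2021, 3, 1),
                   operates_in_public=True)
registry[lic.licence_id] = lic
print(who_is_responsible(registry, "ROB-0001"))  # -> J. Doe
```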

Robots in the Japanese koseki System: Colin P.A. Jones’s Family Law Approach to Robotic Identity and Soft-Law Based Robot Regulation

A completely different, and probably more culture-relative, soft law model for robotic identity and robot registration is based on Japanese family law. This might sound counterintuitive in Euro-American or African contexts, where robots are frequently seen as unnatural and threatening. Yet Japan, being in the vanguard of human–robot communication, is different. In Japanese popular culture robots are routinely depicted as “an everyday part of the natural world that coexists with humans in familial contexts” (Yamaguchi 2019: 135). Moreover, there is a political dimension to it. Since 2007, with Prime Minister Shinzo Abe’s “Innovation 2025” proposal, followed by the “New Robot Strategy” of 2015 and the subsequent blueprint for a super-smart “Society 5.0”, the Japanese government, too, has been eagerly promoting “the virtues of a robot-dependent society and lifestyle” (Robertson 2014: 571). In Abe’s vision—which is futuristic and nostalgic at the same time—robots can help actualize the historicist cultural conception of “beautiful Japan” (Robertson et al. 2019: 34). Robots are seen as a “dream solution to various social problems, ranging from the country’s low birth rate, an insufficient labor force, and a need for foreign migrant workers, to disability, personal safety, and security concerns” (Yamaguchi 2019: 135).

As anthropologist Jennifer RobertsonFootnote 21 reports, nationwide surveys even suggest “that Japanese citizens are more comfortable sharing living and working environments with robots than with foreign caretakers and migrant workers. As their population continues to shrink and age faster than in other postindustrial nation-states, Japanese are banking on the robotics industry to reinvigorate the economy and to preserve the country’s alleged ethnic homogeneity. These initiatives are paralleled by a growing support among some roboticists and politicians to confer citizenship on robots” (Robertson 2014: 571). On this basis, the somewhat odd idea of robots acquiring Japanese civil status became a reality. Japanese family law provided the institutional framework for this.

In Japan, every legally significant transition in a citizen’s life—birth, death, marriage, divorce, adoption, even change of gender—is supposed to be registered in a koseki (戸籍), the registry of a Japanese household’s (ie [家]) members. In fact, it is this registration (which is historically a part of the foundation of civil law and government infrastructure in Japan) that gives legal effect to the above-mentioned events. An extract of a citizen’s koseki serves as the official document that confirms basic details about their Japanese identity and status (Chapman 2008).

Law scholar Colin JonesFootnote 22 sees a basic analogy between Japanese family law and institutional requirements for robot regulation. The crucial point of reference is family law’s concern with parental liability for the interests, torts and crimes of minors. On Jones’ account, many issues of robot law might be “amenable to an approach that sees robots treated analogously to ‘perpetual children’. The provisions on parental liability for harm caused by children … might provide as useful a model for allocating responsibility for robots as anything in products liability or criminal law – if we could just figure out who the ‘parents’ are” (ibid.: 410).Footnote 23

Obviously, this was not too much of a problem for Japanese authorities in the paradigmatic case of Paro,Footnote 24 a therapeutic “mental commitment robot” with the body of an artificial baby harp seal manufactured in Nanto City, Japan. On November 7th 2010, as Jennifer Robertson notes, “Paro was granted its own koseki, or household registry, from the mayor of Nanto City, Toyama Prefecture. Shibata Takanori, Paro’s inventor, is listed as the robot’s father … and a ‘birth date’ of 17 September 2004 is recorded. Media coverage of Paro’s koseki was favorable. […] this prototypical Paro’s koseki can be construed as a branch of Shibata’s … household, which is located in Nanto City. Thus, the ‘special family registry’ is for one particular Paro, and not for all of the seal-bots collectively” (Robertson 2014: 590f).

As mentioned earlier, the koseki conflates family, nationality and citizenship. In the case of Paro, that means that, by virtue of “having a Japanese father, Paro is entitled to a koseki, which confirms the robot’s Japanese citizenship” (Robertson 2014: 591). Oddly, the fact that Paro “is a robot—and not even a humanoid—would appear to be less relevant here than the robot’s ‘ethnic-nationality’” (ibid.). Accordingly, not Sophia (a social humanoid robot that became a Saudi Arabian citizen in October 2017) but Paro was the first robot ever to be granted citizenship.

The fact that robots can be legally adopted as members of a Japanese household is definitely an inspiration for robot law theory. Colin Jones has outlined what a “Robot Koseki” would look like if it were systematically contoured as a (Japanese) law model for regulating autonomous machines. His approach to “practical robot law” centres on providing “definitions that can be used … to establish a framework for robotic identity. Hard or soft laws defining what is and is not a robot would be—should be—the starting point for either applying existing rules to those definitions or developing new rules” (Jones 2019: 418).

That means that a “Robot Koseki” would be essentially informational, yet framed in a robot law perspective. It would start by differentiating registered Robots from unregistered robotic AI systems, the latter thus remaining “robots without capital R” (W.S.). Unregistered AI systems may have many attributes commonly associated with “robots”. Yet they “would not be Robots for purposes of the registration system, or the rules and regulations tied to it” (ibid.). In order to be eligible for koseki registration, “Robots with capital R” (W.S.) would have to meet certain technical and normative criteria, e.g. technical specifications, safety, possible liability nexuses and so forth. In this way a “Robot Koseki” would provide third parties with “assurances” that the registered Robots satisfy certain minimum standards on which “hard and soft law requirements as well as technical rules and regulations” could then be built by governments and private actors (Jones 2019: 453).Footnote 25

Robots registered in the “Robot Koseki” would also be assigned “unique identifying codes or numbers that would become a key part of its identity. Codes identifying members of the same series or production line of robots could also be used. Robot Identification Numbers could even serve as taxpayer identification numbers if the Robot is accorded legal personality and the ability to engage in revenue-producing activities” (ibid.: 455). In the Internet of Things, the “Robot Koseki” would need to work “in a way so that the current registration details of each Robot were accessible to other technology systems … interacting with it”, either in a centralized or distributed database system (ibid.: 464).

As Jones insists, the “Robot Koseki” would not only entail data about machines. Some of the key registration parameters should also “provide information about people involved in the creation and ongoing existence of the Robot, people who through the system will effectively become a part of the Robot’s identity”, i.e. “the maker (or manufacturer), programmer, owner, and user” of the Robot (ibid.: 465). In Jones’ eyes, this is where the Japanese koseki system provides “a particularly useful model, since it involves the registration of a single unit (the family) that is comprised of multiple constituents. If we are to develop robot law from family law analogies and attempt to regulate Robots as a form of ‘perpetual children’, then the koseki system will make it possible to identify who is analogous to their parent(s)” (Ibid.).
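To make the informational character of such a registry concrete, the following minimal Python sketch paraphrases the registration parameters Jones discusses (a unique identifying code, a series code, and the people who become part of the Robot’s identity). The field names and example values are my own hypothetical choices, not Jones’s.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RobotKosekiEntry:
    """A koseki-style record: one unit comprising the machine and the people
    involved in its creation and ongoing existence (the 'perpetual child' analogy)."""
    robot_id: str                      # unique identifying code or number
    series_code: Optional[str]         # members of the same series or production line
    maker: str                         # manufacturer
    programmer: str
    owner: str
    user: str
    meets_minimum_standards: bool      # the assurance offered to third parties
    taxpayer_id: Optional[str] = None  # conceivable only if legal personality were granted

def is_registered_robot(entry: Optional[RobotKosekiEntry]) -> bool:
    """Only registered machines count as 'Robots with a capital R';
    unregistered robotic AI systems fall outside the registration regime."""
    return entry is not None and entry.meets_minimum_standards

# Hypothetical entry; in practice the registry would expose current details
# to other technology systems interacting with the Robot (centralized or distributed).
entry = RobotKosekiEntry("JP-R-000123", "SERIES-A", "Example Robotics KK",
                         "A. Coder", "B. Owner", "C. User", True)
print(is_registered_robot(entry))  # -> True
```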

This amounts to suggesting a soft law basis for hard law robot regulation. According to Jones, a koseki-style Robot registry would not immediately call for governmental legislation but could be organized primarily by industry action. The registry could start as “a creature of code, of soft law and technical standards”; thus, it would first be driven by “industry players, professional associations or open standards organization comparable to the Internet Engineering Task Force, which has developed many of the rules and standards governing the technical aspects of the Internet” (ibid.: 461). Based on these standards, both hard and soft law requirements as well as technical rules and regulations could then be built by governments and private actors (ibid.: 453).

Overall, Jones’ layout of a “Robot Koseki” shows two things: (a) how to solve the problem of “robotic identity” both in a legally compatible and legally effective way without requiring the concept of “robot rights” (in Gunkel’s sense) as a basisFootnote 26; (b) how to provide a helpful soft-law basis for hard-law robot regulation.

Yet apart from soft law: which key ethical distinctions should hard-law robot regulation be based on?

Joanna J. Bryson’s Outline of Roboethics

In a recent paper, AI ethicist Joanna J. BrysonFootnote 27 (who was one of the most influential co-authors of the aforementioned EPSRC paper) has identified two central normative criteria for integrating AI-embedding systems like robots in our society: coherence and a lack of social disruption (Bryson 2018: 15). Bryson’s premises here are that “the core of all ethics is a negotiated or discovered equilibrium that creates and perpetuates a society” and “that integrating a new capacity like artificial intelligence (AI) into our moral systems is an act of normative, not descriptive, ethics” (Bryson 2018: 15). Moreover, Bryson emphasizes that “there is no necessary or predetermined position for AI in our society. This is because both AI and ethical frameworks are artefacts of our societies, and therefore subject to human control” (Bryson 2018: 15).

Bryson’s main thesis on the ethics of AI-embedding systems is: While constructing such systems as either moral agents or patients is indeed possible, neither is desirable. In particular, Bryson argues “that we are unlikely to construct a coherent ethics in which it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI we are obliged to” (Bryson 2018: 15).

So Bryson’s recommendations are as follows: “First, robots should not have deceptive appearance—they should not fool people into thinking they are similar to empathy-deserving moral patients. Second, their AI workings should be ‘transparent’ […]. This implies that clear, generally-comprehensible descriptions of an artefact’s goals and intelligence should be available to any owner, operator, or other concerned party. […]. The goal is that most healthy adult citizens should be able to make correctly-informed decisions about emotional and financial investment. As with fictional characters and plush toys […], we should be able to both experience beneficial emotional engagement, and to maintain explicit knowledge of an artefact’s lack of moral subjectivity” (Bryson 2018: 23).

In my eyes, Bryson’s position is plausible and convincing. Quite obviously, even the most human-like behaving robot will not lose its ontological machine character merely by being open to “humanizing” interpretations. Rather, robots are, and will probably remain, more or less perfect simulations of humans and their agency.

But even if they do not really present an anthropological challenge (cf. Wolfe 1993), they certainly present an ethical one. I endorse the view that both AI and ethical frameworks are artefacts of our societies—and therefore subject to human choice and human control (Bryson 2018). The latter holds for the moral status of robots and other AI systems too. This status is in no way logically or ontologically set; rather, it is, and remains, a choice, not a necessity: “We can choose the types and properties of artefacts that are legal to manufacture and sell, and we can write the legislation that determines the legal rights and duties of any agent capable of knowing those rights and carrying out those duties” (Bryson 2018: 16). Added to this, self-disclosing AI would “help people match the right approach to the right entities, treating humans like humans, and machines like machines” (Bowles 2018: 188; Macrorie et al. 2019).

Conclusion

However, the relational model sketched by Coeckelbergh and Gunkel (2020) also seems helpful when it comes to avoiding incoherence and social disruption in ethics systems (see Van Wynsberghe 2012, 2013; van Wynsberghe and Robbins 2014; Wong and Simon 2020; Wong forthcoming). If the claim of ethics is uncompromising in the sense that it concerns action and agency as such (and not only a limited range thereof), one can argue for a normative basic ethical continuum in the behaviour of humans, meaning that there should be no context of action in which a complete absence of human respect for the integrity of other beings (natural or artificial) would be morally allowed or even encouraged. This might also help to minimize the risk of being morally deskilled by using technology (Coeckelbergh 2012b; Wong 2012; Vallor 2015).

With that in mind, we could consider AI-embedding machines to be at least awe-inspiring. Facing them, we encounter the work and latest evolutionary product of our own intelligence: a culmination of human creativity at its cutting edge. And we are allowed to be astonished, in the sense of the thaumázein in Plato’s Theaetetus.Footnote 28 Along these lines one could think of a roboethics based upon what we owe to ourselves as creators and users of such sophisticated technology. Avoiding disrespectful treatment of robots is ultimately for the sake of the humans, not for the sake of the robots (Darling and Hauert 2013).

Maybe this insight can help inspire an “overlapping consensus” (Rawls 1993: 133–172) in further discussions on responsibly coordinating HRI. Re- or paraphrasing Rawls in this perspective could start with a three-part argument: (a) rather than being dispensable, it seems reasonable to maintain the aforementioned normative basic ethical continuum in the behaviour of humans (some would add: and their artefacts); accordingly, we should (b) look for ways to stabilize this continuum, facing up (c) to the plurality of reasonable though conflicting ethical frameworks in which robots and rights can be discussed. If mere coordination of diversity is the maximum that seems achievable here, nothing more can be hoped for than a modus vivendi in the AI and roboethics community. Yet there might also be some “common ground” on which to build something more stable. In a Rawlsian type of “overlapping consensus”, for instance, diverse and conflicting reasonable frameworks of AI and roboethics would endorse the basic ethical continuum cited above, each from its own point of view. If this configuration works out, both could be achievable: a reasonable non-eliminative hybridity of the ethical discourse on robots and rights—and the intellectual infrastructure for developing this discourse with due diligence.