Artificial moral and legal personhood

  • Original Article
  • AI & SOCIETY

Abstract

This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics (2017) and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which is critical of the Civil Law Rules on Robotics (and particularly of §59 f.). The paper reviews issues related to the moral and legal status of intelligent robots and the notion of legal personhood, including an analysis of the relation between moral and legal personhood in general and with respect to robots in particular. It examines two analogies, to corporations (which are treated as legal persons) and animals, that have been proposed to elucidate the moral and legal status of robots. The paper concludes that one should not ascribe moral and legal personhood to currently existing robots, given their technological limitations, but that one should do so once they have achieved a certain level at which they would become comparable to human beings.

Notes

  1. For an excellent overview of the notion of moral personhood against the background of moral agency and patiency, see Gunkel (2012, pp. 39–65).

  2. Koops et al. (2010, p. 517) rightly argue, “Within legal philosophy, moral personhood is often seen as precondition for legal personhood, building on French’s seminal article on the moral personhood of corporations … no serious argument can be made that a ship or a trust fund is either metaphysical or a moral person”.

  3. The concept of legal personhood with respect to intelligent robots has been discussed by Solum (1992), Asaro (2007), Koops et al. (2010), Miller (2015), Schwitzgebel and Garza (2015), Solaiman (2017), Bryson et al. (2017), and Jaynes (2019).

  4. For some additional information on the meaning of the CLRR, see Bryson et al. (2017, pp. 275–276).

  5. “Open Letter to the European Commission: Artificial Intelligence and Robotics”, http://www.robotics-openletter.eu/ (Accessed 6 May 2020).

  6. A “trust” is commonly defined as a three-party fiduciary relationship in which the trustor transfers money or property to the trustee for the benefit of the beneficiary.

  7. These are not the only theories in the field, but they represent the most common approaches to ascribing moral personhood.

  8. For a substantial, classic analysis of the concept of moral status, based on multiple criteria, see Warren (1997), who eventually argues for seven basic principles, each focusing on a particular property that can (in combination with others) legitimately influence a person’s obligations towards other beings (e.g., pp. 181–184).

  9. Bostrom and Yudkowsky (2014, section “Machines with Moral Status”) comment on Kamm’s view.

  10. In fact, non-natural persons such as corporations do enjoy the status of a “legal person” in law. The underlying reason for this designation, however, is that although only persons can be held accountable for actions, corporations are directed by human beings (after all, companies do not act on their own but always according to the will of a human person or group of persons), so the concept of legal personhood is essentially transferred to the company. If companies could harm other beings without any entity being held responsible, severe injustices would result (e.g., people would fail to receive compensation for harm). The adequacy of this type of reasoning will be briefly examined in the next section.

  11. For a similar view in the context of disability studies, see Sherwin (1991, p. 335): “Persons … are members of a social community which shapes and values them, and personhood is a relational concept that must be defined in terms of interactions and relationships with others.” The disability rights movement shares important features with the so-called robot rights movement, and scholars of the latter movement could learn much from the former about how to present their case more effectively.

  12. For a brief but insightful overview of the ethics of social construction, see Gunkel (2012, pp. 170–175).

  13. By making additional specifications with respect to principles, one is able “to solve the conflicts between (a) differing principles (e.g., nonmaleficence and beneficence) or (b) different interpretations of one principle (e.g., autonomy). Conflicting principles and interpretations should be reconciled against the background of new facts and assumptions in order to solve the moral conflict” (Gordon et al. 2011, p. 297).

  14. The method of balancing is “especially important for reaching judgments in individual cases” (Beauchamp and Childress 2001, p. 18) and should be considered as “the process of finding reasons to support beliefs about which moral norms should prevail” (Beauchamp and Childress 2009, p. 20). Therefore, “balancing has something to do with providing good reasons for justified acts” (Gordon et al. 2011, p. 298).

  15. For example, Solum (1992), Asaro (2007), Matthias (2008), Koops et al. (2010), and Solaiman (2017).

  16. For example, Solaiman (2017, p. 161) adheres to the mainstream view, claiming, “In a nutshell, the requirements or attributes of legal personhood are: (1) a person shall be capable of being a subject of law; (2) being a legal subject entails the ability to exercise rights and to perform duties; and (3) the enjoyment of rights needs to exercise awareness and choice.”

  17. For example, Koops et al. (2010, p. 532) claim, “Building on Solum’s discussion of constitutional rights for AIs, we think that, as long as the behavior of computer agents is ultimately syntactical, based on correlations that have no meaning because the system has no consciousness of the world around it, we cannot grant posthuman rights and liberties that presume the capability to reflect upon one’s actions, initiate intentional action, and take responsibility. For the same reasons, it does not make sense to hold contemporary computer agents liable on the basis of culpable and wrongful action.”

  18. Civil law and criminal law are different legal areas. The European Parliament’s Resolution on Civil Law Rules on Robotics concerns the former, in which disputes involving individuals or organizations are resolved and compensation provided to the victim. The latter deals only with crimes and punishments for criminal offenses. It might be argued, however, that different legal areas may use different concepts of legal personhood, and that, therefore, one should not apply a single concept of legal personhood to different legal areas such as civil and criminal law. This particular line of reasoning is premature and misleading. The most basic legal definition of personhood, which applies to any legal area, involves a recognition that the person has rights and duties. In this respect, there is no difference between civil and criminal law (see Naffine 2009). It is not claimed, however, that there are no differences at all, but only that they are not important with respect to the following analysis of legal personhood.

  19. The view that robots may outperform us in intelligence has been forcefully entertained by Bostrom (2014) and Kurzweil (2005). The related question of whether “superintelligent” non-human beings such as robots should have moral and legal rights remains a matter of debate (Gordon 2020; Gunkel 2018). It seems, however, that if robots attained such a high level of intelligence, they should certainly be deemed full legal persons at that point, with all the rights and duties accorded to humans. To fail to grant them that status would be a gross violation of their moral rights, including our moral obligation to grant them legal personhood in view of their capabilities.

  20. These criteria include self-consciousness, intentionality, the ability to be emotionally affected, having interests, and the ability to develop one's own direction independently of the robot’s initial programming (see also Hildebrandt 2011, pp. 516–518).

  21. For example, Hindu idols are seen as legal persons (Solaiman 2017, pp. 167–168), and the Whanganui River (New Zealand), the Ganges and Yamuna Rivers (India), Te Urewera National Park (New Zealand), and the whole ecosystem in Ecuador are accorded legal personhood (Bryson et al. 2017, p. 280).

  22. The following list of publications, though not exhaustive, contains some classic and also novel contributions: on animal rights: Singer (1975); on AI and legal personhood: Solum (1992), Asaro (2007), Hildebrandt (2011), Koops et al. (2010), and Bryson et al. (2017); on legal personhood with respect to human beings, robots, corporations, and animals: Ripken (2009), Hallevy (2010a, 2010b, 2013), Bertolini (2013), Darling (2016), and Solaiman (2017); on the moral and legal status of robots: Wallach and Allen (2010), Anderson and Anderson (2011), Gunkel and Bryson (2014a), and Lin, Abney and Bekey (2014); on bioethics: Gordon (2012); on the moral rights of robots: Gordon (2020) and Gunkel (2018); on disability rights: Gordon and Tavera-Salyutov (2018); on moral status generally: Warren (1997) and Kamm (2007).

  23. Hallevy believes that one should impose criminal liability on intelligent robots that are capable of “strong AI”, given their potential for engaging in punishable behavior. Solaiman (2017, pp. 172–173) and Charney (2015) both criticise this view, arguing that today’s robots lack agency and, therefore, cannot be held criminally liable in such fashion.

  24. Solaiman (2017, pp. 163–167) briefly discusses the three theories.

  25. Solaiman substantiates the claim by stating that “personhood is generally attached to human beings, and although law recognizes personality of corporations in all legal systems, and of idols in some jurisdictions, these latter two are juristic persons composed of human beings one way or another, and they cannot do anything without their human agents. Therefore, the rights and duties relevant to their personality refer basically to those of humans behind them, which stands in stark contrast to the advocacy for robots’ personhood” (2017, p. 175).

  26. For an overview of the pros and cons of treating corporations as moral persons, see Ripken (2009, pp. 118–130), who seems in favour of doing so. In addition, the references in Ripken’s footnotes 15 and 16 (on p. 103) indicate authors who argue against and for this position, respectively.

  27. For a critical view, see Bertolini (2013, pp. 227–231), who rejects the analogy between robots and animals mainly because of differences in their ontological status, in that animals are natural and robots are artificial. Bertolini states, “Robots are in fact in some cases compared to domesticated animals, but the reasons for such a claimed similitude are not compelling. Indeed, it has been shown that (weakly autonomous) robots and animals behave—and thus ‘act’ depending on the natural or environmental conditions—without the intervention of a human exerting direct control; yet this does not suffice to equate the two or to force a change in the existing legal paradigm” (2013, p. 227; see also his footnote 76 for further references in support of the analogy). Solaiman (2017, pp. 168–171) briefly discusses the analogy between robots and animals, which he eventually rejects, because animals are unable to perform duties, thereby failing to fulfil one main criterion for legal personhood (2017, p. 175).

References

  • Anderson M, Anderson SL (2011) Machine ethics. Cambridge University Press, Cambridge

  • Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 20 March 2019

  • Annas G (2004) American bioethics: crossing human rights and health law boundaries. Oxford University Press, New York

  • Asaro PM (2007) Robots and responsibility from a legal perspective. Proceedings of the IEEE, pp 20–24

  • Beauchamp T, Childress J (2001) Principles of biomedical ethics. Oxford University Press, Oxford

  • Beauchamp T, Childress J (2009) Principles of biomedical ethics. Oxford University Press, Oxford

  • Bertolini A (2013) Robots as products: the case for a realistic analysis of robotic applications and liability rules. Law Innov Technol 5(2):214–247

  • Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford

  • Bostrom N, Yudkowsky E (2014) The ethics of artificial intelligence. In: Ramsey W, Frankish K (eds) The Cambridge handbook of artificial intelligence. Cambridge University Press, Cambridge, pp 316–334

  • Bryson JJ, Diamantis ME, Grand TD (2017) Of, for, and by the people: the legal lacuna of synthetic persons. Artif Intell Law 25(3):273–291

  • Calverley DJ (2006) Android science and animal rights: does an analogy exist? Connect Sci 18(4):403–417

  • Cavalieri P (2001) The animal question: why non-human animals deserve human rights. Oxford University Press, Oxford

  • Charney R (2015) Can androids plead automatism? A review of When Robots Kill: Artificial Intelligence under the Criminal Law by Gabriel Hallevy. Univ Tor Fac Law Rev 73(1):69–72

  • Coeckelbergh M (2014) The moral standing of machines: towards a relational and non-cartesian moral hermeneutics. Philos Technol 27(1):61–77

  • Darling K (2016) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Calo R, Froomkin MA, Kerr I (eds) Robot law. Edward Elgar, Northampton, pp 213–231

  • Dolby RGA (1989) The possibility of computers becoming persons. Soc Epistemol 3(4):321–336

  • Donaldson S, Kymlicka W (2013) Zoopolis: a political theory of animal rights. Oxford University Press, Oxford

  • Dyschkant A (2015) Legal personhood: how we are getting it wrong. Univ Illinois Law Rev 2075–2109

  • Francione GL (2009) Animals as persons: essays on the abolition of animal exploitation. Columbia University Press, New York

  • Fukuyama F (2002) Our posthuman future. Farrar Straus and Giroux, New York

  • Girgen J (2003) The historical and contemporary prosecution and punishment of animals. Anim Law Rev 9:97–133

  • Gordon J-S (2012) Bioethics. Internet Encyclopaedia of Philosophy

  • Gordon J-S (2017) Remarks on a disability-conscious bioethics. In: Pöder J-C, Burckhart H, Gordon J-S (eds) Human rights and disability. interdisciplinary Perspectives. Routledge, London, pp 9–20

  • Gordon J-S (2020) What do we owe to intelligent robots? AI Soc 35:209–223

  • Gordon J-S, Tavera-Salyutov F (2018) Remarks on disability rights legislation. Equal Divers Incl 37(5):506–526

  • Gordon J-S, Rauprich O, Vollman J (2011) Applying the four principles approach. Bioethics 25(6):293–300

  • Gunkel D (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge, Mass

  • Gunkel D (2014) A vindication of the rights of machines. Philos Technol 27(1):113–132

  • Gunkel D (2018) Robot rights. MIT Press, Cambridge, Mass

  • Gunkel DJ, Bryson J (2014a) Introduction to the special issue on machine morality: the machine as moral agent and patient. Philos Technol 27(1):5–8

  • Gunkel DJ, Bryson J (2014b) The machine as moral agent and patient. Philos Technol 27(1):5–142

  • Hallevy G (2010a) The criminal liability of artificial intelligence entities. Social Science Research Network 1–42. http://ssrn.com/abstract=1564096

  • Hallevy G (2010b) Virtual criminal responsibility. Social Science Research Network 1–22. http://ssrn.com/abstract=1835362

  • Hallevy G (2013) When robots kill: artificial intelligence under criminal law. Northeastern University Press, Boston

  • Hildebrandt M (2011) Criminal liability in a smart environment. In: Duff R, Green SP (eds) Philosophical foundations of criminal law. Oxford University Press, Oxford, pp 507–532

  • Jaynes TL (2019) Legal personhood for artificial intelligence: citizenship as the exception to the rule. AI Soc. https://doi.org/10.1007/s00146-019-00897-9

  • Kamm FM (2007) Intricate ethics, rights, responsibilities, and permissible harm. Oxford University Press, Oxford

  • Kant I (2009) Groundwork of the metaphysic of morals. Harper Perennial Modern Classics, New York

  • Kass LR (2002) Life, liberty and the defense of dignity. Encounter Books, San Francisco

  • Koops BJ, Hildebrandt M, Jaquet-Chiffelle DO (2010) Bridging the accountability gap: rights for new entities in the information society? Minn J Law Sci Technol 11(2):497–561

  • Kurki VAJ, Pietrzykowski T (eds) (2017) Legal personhood: animals, artificial intelligence and the unborn. Springer, Berlin

  • Kurzweil R (2005) The singularity is near: when humans transcend biology. Penguin Books, London

  • Lin P, Abney K, Bekey GA (eds) (2014) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge

  • Matthias A (2008) Automaten als Träger von Rechten. Plädoyer für eine Gesetzänderung. Logos Verlag, Berlin

  • Mill JS (1998) Utilitarianism. Oxford University Press, Oxford

  • Miller LF (2015) Granting automata human rights: challenge to a basis of full-rights privilege. Human Rights Rev 16(4):369–391

  • Naffine N (2009) Law’s meaning of life: philosophy, religion, Darwin and the legal person. Hart Publishing, Oxford and Portland

  • Richardson K (2019) Special issue: ethics of AI and robotics. AI Soc 34(1):1–163

  • Schwitzgebel E, Garza M (2015) A defense of the rights of artificial intelligences. Midwest Stud Philos 39(1):98–119

  • Sherwin S (1991) Abortion through a feminist ethics lens. Dialogue 30(3):327–342

  • Singer P (1975) Animal liberation. Avon Books, London

  • Singer P (1979) Practical ethics. Cambridge University Press, Cambridge

  • Singer P (2009) Speciesism and moral status. Metaphilosophy 40(3–4):567–581

  • Singer P (2011) The expanding circle: ethics, evolution, and moral progress, 1st edn. Princeton University Press, Princeton

  • Singer P, Cavalieri P (eds) (1993) The Great Ape Project: equality beyond humanity. Fourth Estate Publishing, London

  • Smith B (1928) Legal personality. Yale Law Journal 37(3):283–299

  • Solaiman SM (2017) Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. Artif Intell Law 25(2):155–179

  • Solum LB (1992) Legal personhood for artificial intelligences. N Carolina Law Rev 70:1231–1287

  • Wallach W, Allen C (2010) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford

  • Warren MA (1997) Moral status: obligations to persons and other living things. Clarendon Press, Oxford

Acknowledgements

I wish to thank Julian Savulescu for discussing an early draft of this paper with me in Toronto in 2018. Furthermore, I thank Nick Bostrom, Andreas Sandberg, and Stuart Armstrong for the helpful comments on a revised draft during my research stay at the Future of Humanity Institute at Oxford in 2019. I further refined the paper during my research stay at Tallinn Law School in 2019, where I benefitted from the input of Tanel Kerikmäe. I also profited greatly from discussions with other colleagues and students during my presentations on the various issues covered in this paper. Last but not least, I wish to thank the anonymous referees for their comments.

Funding

This research is funded by the European Social Fund under the activity ‘Improvement of Researchers’ Qualification by Implementing World-class R&D Projects’, Measure No. 09.3.3–LMT–K–712.

Author information

Corresponding author

Correspondence to John-Stewart Gordon.


Cite this article

Gordon, JS. Artificial moral and legal personhood. AI & Soc 36, 457–471 (2021). https://doi.org/10.1007/s00146-020-01063-2
