We live in interesting times. Humanity has witnessed unprecedented technological advances in artificial intelligence (AI), which now impacts our daily lives through, for example, our smartphones and the Internet of Things. AI determines the outcomes of our credit and loan applications; in the United States, it often informs parole decisions; and it pervades our work environments.
In recent decades, we have seen the positive effects of AI in almost every area of our lives, but we have also encountered significant ethical and legal challenges in such areas as autonomous transportation, machine bias, and the black box problem. Concerns have also arisen regarding the rapid development and increasing use of smart technologies, particularly with respect to their impact on fundamental rights (Gordon 2020).
This special issue provides an excellent overview of current debates in the realm of AI and law. It contains timely and original articles that thoroughly examine the ethical, legal, and socio-political implications of AI and law as viewed from various academic perspectives, such as philosophy, theology, law, medicine, and computer science. The issues covered include, for example, the key concept of personhood and its legal and ethical dimensions, AI in healthcare, legal regulation of AI, and the legal and ethical issues related to autonomous systems.
In my view, the papers reveal, among other things—perhaps not surprisingly—that the current legal system is ill-equipped to solve the pressing issues created by the ever-increasing technological advances in AI. In other words, we need proper AI regulation to deal with such present and anticipated issues as machine bias and legal decision making, electronic personhood, and legal responsibility for autonomous machines (e.g., autonomous transportation). We could refer to the needed framework as a General AI Law (GAIL). AI does not stop at national borders; it is inherently global. Therefore, humanity needs a global approach to solving the legal problems that AI poses. Many of the papers in this special issue provide interesting solutions to persistent problems and thereby attempt to shape the ongoing debates quite substantially.
Most domains of human life are, legally speaking, highly regulated. However, today’s attorneys and judges are, for the most part, not well versed in the implications of AI for law, the legal system, and legal education. To address the changes resulting from the growing application of AI, we must revise our legal curricula. However, one can make effective changes to a system only with a proper understanding of the issues at hand. Updating professional legal education in this area will greatly benefit society, since it will enable legal experts to provide better service and to support policymakers in creating the needed GAIL.
It is impossible for me, in this brief editorial, to do justice to all the papers contained in this special issue, but I would like to briefly highlight two important topics that are addressed, either explicitly or implicitly, in many of the papers. The first topic, which is examined explicitly by several authors, concerns the concept of personhood. Kestutis Mosakas defends, quite convincingly in my view, the traditional consciousness criterion for moral status in the context of social robots, in opposition to rival approaches, including Gunkel’s (2012) well-known social-relational approach. Joshua Jowitt, on the other hand, adheres to a Kantian-oriented concept of agency as the basis for legal personhood and thereby offers a moral foundation for the ongoing legal debate over ascribing legal personhood to robots. When reading Jowitt, however, we should keep in mind that the concept of agency necessarily presupposes consciousness, since it seems impossible that an entity lacking consciousness could be deemed a responsible agent. The reverse does not hold: consciousness may, at some point, lead to agency but does not presuppose it.
The concept of personhood is also examined from different vantage points in a joint paper by David Gunkel (from the field of philosophy) and Jordan Wales (from theology). While Gunkel defends his well-known phenomenological approach to moral robots, Wales argues against this approach by claiming that robots are not “natural” persons by definition. This is because they are not endowed with consciousness and are not oriented toward a self-aware inter-subjectivity, which Wales sees as the basis for compassion toward fellow persons. In general, the interesting debate between Gunkel and Wales displays quite prominently the different lines of argumentation with respect to the concept of personhood.
Finally, on this first topic, John-Stewart Gordon provides a substantial analysis of the concepts of moral and legal personhood and also examines their complex relation. He concludes that current robots do not qualify for personhood but that future robots may do so based on their technological sophistication. Gordon, like Jowitt, claims that one should use a uniform criterion to determine the eligibility of all entities for moral status, without making any exceptions—for example, regarding how the entity came into existence. Ultimately, the concept of personhood—whatever that means in detail—is the very foundation of our moral and legal rights. If robots meet this threshold at some point, then it is no longer up to us to decide whether they are eligible for a moral status and rights; they must be viewed as entitled to this eligibility based on their capabilities, independently of our say-so.
This leads us to the second topic that underlies much of the discussion in this special issue—the meaning of moral agency for AI machines. This topic is quite significant with respect to the whole idea of holding intelligent machines or robots morally responsible for their actions. However, many of the papers in this special issue sidestep this point, either because the authors believe that, at some future point, robots will become moral agents or because their analysis does not require artificial moral agency in the first place. An exception is the provocative paper by Carissa Véliz, who defends the view that algorithms or machines are not moral agents. Her line of reasoning is as follows: conscious experience, or sentience, is necessary for moral agency, and since algorithms are not sentient by nature, they are not moral agents. To support her point, Véliz argues that algorithms are akin to moral zombies, and since moral zombies are not moral agents, one is justified in claiming that the same is true of algorithms. As she states, “Only beings who can experience pain and pleasure can understand what it means to inflict pain or cause pleasure, and only those with this moral understanding can be moral agents.”
My very brief response to Véliz is that current intelligent and autonomous machines do indeed lack moral agency, given their limited capabilities, but that this may change over time. Her particular view that sentience is necessary for moral agency is, at least in my view, somewhat misleading, since it would rule out those human beings who reportedly suffer from congenital analgesia and are therefore unable to experience sensations such as pain. Whether such people can fully understand what pain is remains an open question, quite similar to the question of whether people who are congenitally colour-blind can understand what colour vision really is. However, it seems clear that people with congenital analgesia do understand that it is morally wrong to intentionally inflict pain on others. Their understanding seems to be based on their intellectual capacity to imagine what pain could mean for other people, rather than on any personal experience of pain. Therefore, I am rather hesitant to agree that sentience is, in general, necessary for moral agency.[1]
I would like to thank the contributing authors for their excellent and challenging papers, which hold great promise to shape this emerging field significantly. I am also deeply grateful to all the referees for their outstanding work in providing detailed and helpful comments. I hope that this special issue will provide a good start for discussing some of our most challenging current legal and ethical problems related to AI. This is not the end; this is the beginning.
Notes
1. I believe that this is only one possible counterexample among others, but this editorial is not the place to engage in a further response to Véliz’s paper.
References
Gordon J-S (ed) (2020) Smart technologies and fundamental rights. Brill/Rodopi, Leiden
Gunkel D (2012) The machine question: critical perspectives on AI, robots, and ethics. MIT Press, Cambridge, MA