Abstract
Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what conditions should we grant moral status to an artificial intelligence (AI) system? This paper examines several arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each case but the last, we find unresolved issues with the argument, which leads us to move on to the next. We set the idea of indirect duties aside, since such duties do not imply considering an AI system for its own sake. The paper rejects the relational argument and the argument from intelligence. The argument from life may lead us to grant moral status to an AI system, but only in a weak sense. Sentience, by contrast, is a strong argument for the moral status of an AI system, based, among other things, on the Aristotelian principle of equality: that like cases should be treated alike. The paper points out, however, that no AI system is sentient at the current level of technological development.
Notes
The software agent, developed by Apple Inc., that can answer questions, make recommendations, and perform actions using voice queries and a natural language interface (Hoy 2018).
The notion of having a moral status is very close to the notions of having moral standing, having moral considerability, and/or having moral patiency. A deeper analysis could reveal some differences among these notions, but we will use them interchangeably in this paper and we will prioritize the expression moral status when possible. Let us also note that this question of the moral status of AI belongs to the fields of AI ethics and robot ethics, which are concerned not only with moral agency but also with moral patiency (Loh 2018).
While a system such as a computer may also have a physical dimension (a screen, a case, electric circuits, etc.), its useful product may be only virtual: a hash value, the solution to a facial recognition request, a risk prediction, and so on.
Sparrow (2012) himself considers this test to be an explanatory thought experiment, rather than a discriminatory test like the Turing test.
That is why we should be careful when using moral dilemmas, such as the Turing triage test, to assess the moral status of an entity. It is entirely conceivable that you have stronger reasons to save something without moral status (like a work of art) than an entity with moral status (like a plant or an animal). Likewise, you may have stronger reasons to save a child than an elderly person, but that does not mean that the elderly have no moral status (of course they do).
Building on this distinction, Hogan (2017) contests the claim that the machine question is the same as the animal question. She claims that the latter is patient-centered while the former originates in agency. The fact that robots may be moral agents before being patients would prevent us from addressing the question of the moral status of AI using the same arguments we would use for animals. However, we do not see why different types of entities should be considered differently simply because they acquired moral agency or patiency in a different order. An AI system can have moral status without being a moral agent if we can have reasons to respect its rights and to act for its own sake.
It should be pointed out that some animal ethicists, such as Korsgaard (2018, 102–5), criticize the idea of indirect duties on the grounds that it is "almost incoherent." At the very least, the idea of indirect duties invites an incoherent attitude: we are compelled to treat animals with gratitude, or love, yet also to detach this attitude from any moral concern for the animals themselves.
Note that the idea of an indirect duty to an animal or an AI system contains at least two distinct theses. First, it implies that we owe the duty to treat an animal well to ourselves or to other humans, rather than directly to the animal. Second, it implies that the basis of the duty lies in the effects on human behavior, dispositions, or moral character, not in the effects on the animal. As Korsgaard (1996, 101 ff.) points out, these two theses are logically separate: we might owe it to ourselves, not directly to animals, to treat them well, and nonetheless the duty could be to treat animals well for their own sake, rather than for the effect it would have on humans.
Little (1999) makes an argument similar to Sherwin's, though her focus is specifically on abortion. See also the work of Hester et al. (2000) for an application of the relational approach to questions of environmental ethics (for example, the moral considerability of land-related entities).
His most recent work takes a somewhat different direction, however, framing the issues in terms of the debate between normative and descriptive claims in moral philosophy, and suggesting that we should “deconstruct” the “is-ought inference” using the work of Emmanuel Levinas (Gunkel 2018, 159).
On the importance of universalizability in moral philosophy and the challenges associated with nonconsequentialist approaches, see Pettit (2000).
For other critical perspectives on the relational approach, see Anne Gerdes (2016).
Legg and Hutter (2007) even propose a formalization of this "universal intelligence," which may be applied to measure a machine's intelligence and which involves, among other things, a reward function and the principle of Occam's razor; a sketch of their measure follows.
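For illustration, here is a sketch, in LaTeX notation, of the measure along the lines of Legg and Hutter's definition:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_{\mu}^{\pi}$$

Here $\pi$ is the agent being measured, $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (this is where Occam's razor enters: simpler environments receive exponentially greater weight), and $V_{\mu}^{\pi}$ is the expected total reward the agent accumulates in $\mu$ (the reward function at work). An agent thus counts as "universally intelligent" to the extent that it earns high reward across all computable environments, with the simplest environments weighted most heavily.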
Admittedly, more intelligent humans often enjoy a higher social status, though this tendency is itself heavily criticized. The egalitarian school of thought, exemplified by political philosophers such as John Rawls (1971), argues that we should reduce, as much as possible, the effect of natural endowments (including any genetic predisposition to intelligence) on people's liberties and opportunities in life. On this view, intelligence is an irrelevant condition for higher social status in addition to being an irrelevant condition for moral status.
However, it has to be admitted that artificial life is generally perceived as a simulation of life rather than as an authentic form of life (Boden 1996).
See also Brey (2008) for other arguments against Floridi's ontocentrism. Brey rejects the idea that everything that exists has an intrinsic moral worth, but he suggests that inanimate things have a potential extrinsic, instrumental, or emotional value for persons. This argument falls back on something similar to either the indirect duties or the relational argument we discussed in the previous sections of this paper.
Neely (2014) goes further and invites us to include all beings that have an interest, a criterion that she considers to be more inclusive than sentience.
This theory has the advantage of respecting our epistemic limits. Its main disadvantage is that it implies that we ought to treat philosophical zombies as human beings, a further problem that would need to be discussed.
References
Ackerman E (2018) Robotic tortoise helps kids to learn that robot abuse is a bad thing. IEEE Spectrum, March 14, 2018. https://spectrum.ieee.org/automaton/robotics/robotics-hardware/shelly-robotic-tortoise-helps-kids-learn-that-robot-abuse-is-a-bad-thing
Aristotle (2000) Nicomachean ethics (trans: Crisp R). Cambridge University Press, Cambridge
Bedau MA, Cleland CE (eds) (2010) The nature of life: classical and contemporary perspectives from philosophy and science. Cambridge University Press, Cambridge
Bloom P, Harris S (2018) It’s Westworld: what’s wrong with cruelty to robots? The New York Times, April 23, 2018, sec. Opinion. https://www.nytimes.com/2018/04/23/opinion/westworld-conscious-robots-morality.html.
Boden MA (1996) The philosophy of artificial life. Oxford University Press, Oxford
Bostrom N, Yudkowsky E (2014) The ethics of artificial intelligence. In: Frankish K, Ramsey WM (eds) The Cambridge handbook of artificial intelligence. Cambridge University Press, Cambridge
Brey P (2008) Do we have moral duties towards information objects? Ethics Inf Technol 10(2):109–114. https://doi.org/10.1007/s10676-008-9170-x
Bringsjord S, Govindarajulu NS (2018) Artificial intelligence. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (Summer 2020 edition). https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/
Broom DM (2016) Considering animals’ feelings: précis of sentience and animal welfare. Anim Sentience 2016:005
Bryson J (2019) The past decade and future of AI's impact on society. In: Towards a new enlightenment? A transcendent decade. Turner-BBVA, pp 127–169. https://www.bbvaopenmind.com/en/books/towards-a-new-enlightenment-a-transcendentdecade/
Coeckelbergh M (2010) Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf Technol 12(3):209–221. https://doi.org/10.1007/s10676-010-9235-5
Cranor C (1975) Toward a theory of respect for persons. Am Philos Q 12(4):309–319
Danaher J (2017) The symbolic-consequences argument in the sex robot debate. In: Danaher J (ed) Robot sex. The MIT Press, Cambridge
Danaher J (2019) Welcoming robots into the moral circle: a defence of ethical behaviourism. Sci Eng Ethics. https://doi.org/10.1007/s11948-019-00119-x
Darling K (2016) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Calo R, Froomkin AM, Kerr I (eds) Robot law. Edward Elgar, Cheltenham
Dehaene S, Lau H, Kouider S (2017) What is consciousness, and could machines have it? Science 358:486–492
Federal Ethics Committee on Non-Human Biotechnology (ECNH) (2008) The dignity of living beings with regard to plants. https://www.ekah.admin.ch/inhalte/ekah-dateien/dokumentation/publikationen/e-Broschure-Wurde-Pflanze-2008.pdf
Floridi L (2010) Information: a very short introduction. Oxford University Press, New York
Floridi L, Sanders JW (2002) Mapping the foundationalist debate in computer ethics. Ethics Inf Technol 4(1):1–9. https://doi.org/10.1023/A:1015209807065
Floridi L, Sanders JW (2004) On the morality of artificial agents. Mind Mach 14(August):349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Frankena WK (1986) The ethics of respect for persons. Philos Top 14(2):149–167
Gerdes A (2016) The issue of moral consideration in robot ethics. ACM Sigcas Comput Soc 45(3):274–279
Giroux V, Larue R (2015) Pathocentrisme. In: Bourg D, Papaux A (eds) Dictionnaire de la pensée écologique. Presses universitaires de France, Paris
Gonzalez R (2018) Hey Alexa, what are you doing to my kid's brain? Wired, May 11, 2018
Gruen L (2017) The moral status of animals. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (Fall 2017 edition). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2017/entries/moral-animal/
Gunkel DJ (2012) The machine question: critical perspectives on AI, robots, and ethics. The MIT Press, Cambridge
Gunkel DJ (2014) A vindication of the rights of machines. Philos Technol 27(1):113–132. https://doi.org/10.1007/s13347-013-0121-z
Gunkel DJ (2018) Robot rights. The MIT Press, Cambridge
Harvey G (2005) Animism: respecting the living world. Columbia University Press, New York
Heams T (2019) Infravies: Le vivant sans frontières. Le Seuil, Paris
Hester L, McPherson D, Booth A, Cheney J (2000) Indigenous worlds and Callicott’s land ethic. Environ Ethics. https://doi.org/10.5840/enviroethics200022318
Hill TE Jr (1993) Donagan's Kant. Ethics 104(1):22
Hill RK (2016) What an algorithm is. Philos Technol 29(1):35–59. https://doi.org/10.1007/s13347-014-0184-5
Hogan K (2017) Is the machine question the same question as the animal question? Ethics Inf Technol 19(1):29–38. https://doi.org/10.1007/s10676-017-9418-4
Hoy MB (2018) Alexa, Siri, Cortana, and more: an introduction to voice assistants. Med Ref Serv Q 37(1):81–88
Jaquet F, Cova F (2018) Of hosts and men: Westworld and speciesism. In: South JB, Engels KS, Irwin W (eds) Westworld and philosophy: if you go looking for the truth, get the whole thing. Wiley-Blackwell, Hoboken, pp 219–228
Jaworska A, Tannenbaum J (2018) The grounds of moral status. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (Spring 2018 edition). Metaphysics Research Lab, Stanford University
Johnson D, Verdicchio M (2018) Why robots should not be treated like animals. Ethics Inf Technol 20(4):291–301
Kamm FM (2007) Intricate ethics. Oxford University Press, New York
Kant I (1785) Groundwork of the metaphysics of morals. Gregor M (ed) Cambridge University Press, Cambridge. https://doi.org/10.1017/CBO9780511809590
Kant I (1997) Moral philosophy: Collins's lecture notes. In: Heath P, Schneewind JB (eds and trans) Lectures on ethics (Cambridge edition of the works of Immanuel Kant). Cambridge University Press, Cambridge, pp 37–222. Original is Moralphilosophie Collins, in the standard Akademie der Wissenschaften edition, volume 27. https://doi.org/10.1017/CBO9781107049512
Korsgaard CM (1996) The sources of normativity. Cambridge University Press, Cambridge
Korsgaard CM (2018) Fellow creatures: our obligations to the other animals. Uehiro series in practical ethics. Oxford University Press, Oxford
Legg S, Hutter M (2007) Universal intelligence: a definition of machine intelligence. Minds Mach 17:391–444
Little MO (1999) Abortion, intimacy, and the duty to gestate. Ethical Theory Moral Pract 2(3):295–312. https://doi.org/10.1023/A:1009955129773
Loh J (2018) Maschinenethik und Roboterethik. In: Bendel O (ed) Handbuch Maschinenethik. Springer VS, Wiesbaden, pp 75–93
Low P et al (2012) The Cambridge declaration on consciousness. Publicly proclaimed in Cambridge, UK, on July 7, 2012, at the Francis Crick Memorial Conference on Consciousness in Human and Non-Human Animals
Martin D (2017) Who should decide how machines make morally laden decisions? Sci Eng Ethics 23:951–967. https://doi.org/10.1007/s11948-016-9833-7
Müller VC (2020) Ethics of artificial intelligence and robotics. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (Winter 2020 edition). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2020/entries/ethics-ai/
Nagel T (1987) What does it all mean? Oxford University Press
Neely EL (2014) Machines and the moral community. Philos Technol 27(1):97–111. https://doi.org/10.1007/s13347-013-0114-y
Nolan J, Joy L (2016) Westworld. HBO. http://www.imdb.com/title/tt0475784/.
Pettit P (2000) Non-consequentialism and universalizability. Philos Q 50:175–190
Quinn W (1984) Abortion: identity and loss. Philos Public Aff 13(1):24–54
Rawls J (1971) A theory of justice. Belknap Press of Harvard University Press, Cambridge
Scanlon T (1998) What we owe to each other. Belknap Press of Harvard University Press, Cambridge
Sebo J (2018) The moral problem of other minds. Harv Rev Philos 25:51–70. https://doi.org/10.5840/harvardreview20185913
Shepherd J (2018) Consciousness and moral status. Routledge
Sherwin S (2009) Relational existence and termination of lives: when embodiment precludes agency. In: Campbell S, Meynell L, Sherwin S (eds) Embodiment and agency. Pennsylvania State University Press, University Park, pp 145–163
Shue H (1988) Mediating duties. Ethics 98(4):687–704
Singer P (2011) Practical ethics, 3rd edn. Cambridge University Press, New York
Singer P, Sagan A (2009) When robots have feelings. The Guardian, December 14, 2009.
Sparrow R (2004) The Turing triage test. Ethics Inf Technol 6(4):203–213. https://doi.org/10.1007/s10676-004-6491-2
Sparrow R (2012) Can machines be people? Reflections on the Turing triage test. In: Lin P, Abney K, Bekey G (eds) Robot ethics: the ethical and social implications of robotics. The MIT Press, Cambridge, pp 301–315
Stone CD (1985) Should trees have standing? Revisited: how far will law and morals reach? A pluralist perspective. South Calif Law Rev 59:1–156
Stone J (1987) Why potentiality matters. Can J Philos 17(4):815
Taylor P (1981) The ethics of respect for nature. Environ Ethics 3:197–218
Tegmark M (2017) Life 3.0: being human in the age of artificial intelligence. Knopf, New York
Victor D (2015) Hitchhiking robot, safe in several countries, meets its end in Philadelphia. The New York Times, August 3, 2015, sec. U.S. https://www.nytimes.com/2015/08/04/us/hitchhiking-robot-safe-in-several-countries-meets-its-end-in-philadelphia.html
Warren MA (1997) Moral status: obligations to persons and other living things. Clarendon Press, Oxford
Wilson S (2002) Indirect duties to animals. J Value Inquiry 36(1):17–27. https://doi.org/10.1023/A:1014972803058
Wood A (2009) Duties to oneself, duties of respect to others. In: Hill TE Jr (ed) The blackwell guide to kant’s ethics. Blackwell, Oxford