
In search of the moral status of AI: why sentience is a strong argument


Abstract

Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what conditions should we grant moral status to an artificial intelligence (AI) system? This paper examines different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each case but the last, we find unresolved issues with the argument, which leads us to move on to the next. We set the idea of indirect duties aside, since such duties do not involve considering an AI system for its own sake. The paper rejects the relational argument and the argument from intelligence. The argument from life may lead us to grant moral status to an AI system, but only in a weak sense. Sentience, by contrast, is a strong argument for the moral status of an AI system, based, among other things, on the Aristotelian principle of equality: that like cases should be treated alike. The paper points out, however, that no AI system is sentient at the current level of technological development.

Notes

  1. Siri is the software agent, developed by Apple Inc., that can answer questions, make recommendations, and perform actions using voice queries and a natural language interface (Hoy 2018).

  2. The notion of having a moral status is very close to the notions of having moral standing, having moral considerability, and having moral patiency. A deeper analysis could reveal some differences among these notions, but we will use them interchangeably in this paper and prioritize the expression moral status when possible. Let us also note that this question of the moral status of AI belongs to the fields of AI ethics and robot ethics, which are concerned not only with moral agency but also with moral patiency (Loh 2018).

  3. While a system such as a computer may also have a physical dimension (a screen, a case, electric circuits, etc.), its useful product may be only virtual: a hash value, the solution to a facial recognition request, a risk prediction, and so on.

  4. Sparrow (2012) himself considers this test to be an explanatory thought experiment, rather than a discriminatory test like the Turing test.

  5. That is why we should be careful when using moral dilemmas, such as the Turing Triage Test, to assess the moral status of an entity. It is entirely conceivable that you have stronger reasons to save something without moral status (like a work of art) than to save an entity with moral status (like a plant or an animal). Likewise, you may have stronger reasons to save a child than an elderly person, but that does not mean that the elderly have no moral status (of course they do).

  6. Building on this distinction, Hogan (2017) contests the claim that the machine question is the same as the animal question. She claims that the latter is patient-centered while the former originates in questions of agency. If robots may become moral agents before becoming moral patients, this would prevent us from addressing the question of the moral status of AI with the same arguments we use for animals. However, we do not see why different types of entities should be considered differently simply because they acquired moral agency or patiency in a different order. An AI system can have moral status without being a moral agent if we can have reasons to respect its rights and to act for its own sake.

  7. It should be pointed out that some animal ethicists, such as Korsgaard (2018, 102–5), criticize the idea of indirect duties on the basis “that it is almost incoherent.” At the very least, the idea of indirect duties invites us to adopt an incoherent attitude: we are compelled to treat animals with gratitude or love, yet also to detach this attitude from any moral concern for the animals themselves.

  8. Note that there are at least two distinct theses in the idea of an indirect duty to an animal or an AI system. First, it implies that we owe the duty to treat an animal well to ourselves or to other humans, rather than directly to the animal. Second, it implies that the basis of the duty resides in the effects on human behavior, dispositions, or moral character, not in the effects on the animal. As Korsgaard (1996, 101 ff.) points out, these two theses are logically separate: we might owe it to ourselves, not directly to animals, to treat them well, and the duty could nonetheless be to treat animals well for their own sake, rather than for the effect doing so would have on humans.

  9. Little (1999) makes a similar argument to the one made by Sherwin, though her focus is on abortion specifically. See also the work of Hester et al. (2000) for the application of the relational approach to questions of environmental ethics (for example, the moral considerability of land-related entities).

  10. His most recent work takes a somewhat different direction, however, framing the issues in terms of the debate between normative and descriptive claims in moral philosophy, and suggesting that we should “deconstruct” the “is-ought inference” using the work of Emmanuel Levinas (Gunkel 2018, 159).

  11. See also Korsgaard (2018, 102–5) and Thomas Scanlon (1998, 164–65). Although their work is not rooted in a relational perspective, they provide further elaborations on this idea.

  12. On the importance of universalizability in moral philosophy and the challenges associated with nonconsequentialist approaches, see Pettit (2000).

  13. For other critical perspectives on the relational approach, see Anne Gerdes (2016).

  14. Contemporary accounts of similar views can be found in the work of Quinn (1984) and Stone (1987).

  15. Legg and Hutter (2007) even propose a formalization of this “universal intelligence,” which may be applied to measure a machine’s intelligence—implying, among other things, a reward function and the principle of Occam’s razor.
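
      For illustration, here is the general shape of their measure as we recall it (readers should consult Legg and Hutter (2007) for the exact definition): the universal intelligence of an agent $\pi$ is a complexity-weighted sum of its expected rewards across all computable environments,

      $$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi,$$

      where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (the weighting through which Occam's razor enters: simpler environments count for more), and $V_\mu^\pi$ is the expected total reward earned by agent $\pi$ in environment $\mu$.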

  16. For an overview of the arguments on the importance of treating incapacitated humans with decency, see the work of Hill (1993), Frankena (1986), and Cranor (1975), among others.

  17. Admittedly, more intelligent humans often enjoy higher social status, but this tendency is itself heavily criticized. The egalitarian current of thought, exemplified by political philosophers such as John Rawls (1971), argues that we should reduce, as much as possible, the effect of natural endowments (including a genetic predisposition to intelligence) on people’s liberties and opportunities in life. On this view, intelligence is an irrelevant condition for higher social status, just as it is an irrelevant condition for moral status.

  18. However, it has to be admitted that artificial life is generally perceived as a simulation of life rather than as an authentic form of life (Boden 1996).

  19. See also Brey (2008) for other arguments against Floridi’s ontocentrism. Brey rejects the idea that everything that exists has intrinsic moral worth, but he suggests that inanimate things can have a potential extrinsic, instrumental, or emotional value for persons. This argument falls back on something similar to either the indirect duties or the relational argument discussed in the previous sections of this paper.

  20. Neely (2014) goes further and invites us to include all beings that have an interest, a criterion that she considers to be more inclusive than sentience.

  21. This theory has the advantage of respecting our epistemic limits—the main disadvantage being that it implies that we ought to treat philosophical zombies as human beings, which is another problem that would need to be further discussed.

Author information

Corresponding author

Correspondence to Dominic Martin.

About this article

Cite this article

Gibert, M., Martin, D. In search of the moral status of AI: why sentience is a strong argument. AI & Soc 37, 319–330 (2022). https://doi.org/10.1007/s00146-021-01179-z
