
Machines and the Moral Community


Abstract

A key distinction in ethics is between members and nonmembers of the moral community. Over time, our notion of this community has expanded as we have moved from a rationality criterion to a sentience criterion for membership. I argue that a sentience criterion is insufficient to accommodate all members of the moral community; the true underlying criterion can be understood in terms of whether a being has interests. This may be extended to conscious, self-aware machines, as well as to any autonomous intelligent machines. Such machines exhibit an ability to formulate desires for the course of their own existence; this gives them basic moral standing. While not all machines display autonomy, those which do must be treated as moral patients; to ignore their claims to moral recognition is to repeat past errors. I thus urge moral generosity with respect to the ethical claims of intelligent machines.

Notes

  1. For instance, the National Institutes of Health (2013) has recently designated chimpanzees as inappropriate for most forms of animal research, since they are our closest relatives and “are capable of exhibiting a wide range of emotions; expressing personality; and demonstrating individual needs, desires, and preferences.” The sort of clear distinction between human and nonhuman animals once thought to exist is increasingly being challenged, giving rise to new ethical implications.

  2. Obviously, there is clarification required to specify what constitutes unnecessary suffering and exactly how much moral standing animals have. However, sentience suffices to give them a foot in the door of the moral community, so to speak.

  3. The owner of an object could be the community as a whole, as with public art installations. If someone were to destroy the Vietnam Veterans Memorial, one could argue that doing so harms the public (which has a claim on the memorial) and is thus morally wrong. It would be odd to say that you had morally wronged the monument itself, however.

  4. I am concerned in this paper with what it takes for a machine to be deserving of rights and hence be a moral patient. I leave open the question of what it would take for a machine to have moral responsibilities and thus be a moral agent.

  5. This action might be justified if it were done out of a different motivation. Even if I lack his consent, deliberately stepping on his foot might be acceptable if it prevented a greater harm (such as his stepping into the path of a vehicle). However, this is a rather different case from interfering with another’s body simply because it entertains me.

  6. This is why it would, for instance, be wrong to take pornographic photos of a person in a persistent vegetative state; we believe that a person can be harmed even if he or she is unaware of it.

  7. One could also justify suicide this way for some cases, since my interest in bodily integrity could be outweighed by an interest in avoiding large amounts of suffering from a terminal disease, say. While we have an interest in bodily integrity, it is not the only interest that matters.

  8. We see this both in Kant (1786/1996) with the view of rational beings as ends in themselves and in Mill (1859/1993) with the emphasis on individual liberty.

  9. While I will not rehearse the arguments for each ethical theory in detail, note that ignoring a person’s desires for his life fails to account for the utility or disutility generated by particular actions, treats the person as a means to an end, is certainly not something rational people are likely to consent to from behind a veil of ignorance, and demonstrates a lack of care, compassion, and benevolence. None of these ethical theories will condone simply ignoring the desires of a person, although they will almost certainly allow us to take actions counter to those desires in many cases.

  10. This is one reason why advance directives are important, even if fraught with complications: they allow a person to express her wishes in advance to cover circumstances (such as being in a coma) where she cannot do so directly.

  11. An interesting discussion of the connection between self-awareness and moral standing (or personhood, as she puts it) can be found in Warren’s (1973) discussion of personhood and abortion, as well as in Scruton (2006).

  12. It might be that consciousness is also unnecessary for having interests, particularly if we consider an objective list view of welfare, as Basl (2012) notes. Hence, the category of moral patients may extend slightly further than I argue for here; any machine with interests will count, although I am only arguing here that conscious and self-aware machines have interests.

  13. I believe that we are more likely to recognize as conscious a machine which has a robust consciousness, since that consciousness is more like our own and thus more apt to display behaviors which match up with the conscious behaviors of humans. It is far from clear how we would ever determine that a machine had an awareness of colors if that were the full extent of its consciousness. Hence, while we may create such limited machines, I suspect we will not realize we have done so.

  14. Ruffo (2012) would likely object to this conclusion as she believes that machines are not things which are capable of well-being or ill-being because they lack human feelings. I find this unconvincing for two reasons. First, I believe you could create a case which paralleled the congenital analgesia example and argue that it is still wrong to harm such a person even if she lacked emotion. Second, it is not clear to me why she assumes that we will never be able to create machines which have emotions. It is true that we cannot currently do so, but there was a time when everyone was certain a machine would never be able to play chess. This has, of course, proven false; as such, I find our current capabilities to be poor predictors of future ability.

  15. They provide a formal definition (Legg and Hutter 2007); however, space does not permit the detailed exposition required to fully explicate this definition.
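
     As a point of reference, here is a rough, informal sketch of that definition (my paraphrase, not Legg and Hutter’s exact notation): the universal intelligence of an agent $\pi$ is $\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} V^{\pi}_{\mu}$, where $E$ is the class of computable reward-bearing environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward the agent earns in $\mu$. Intelligence is thus an agent’s expected performance across all computable environments, weighted toward the simpler ones.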

  16. I am using “autonomy” in the sense typical of ethics, meaning something akin to “being able to make one’s decisions free of external influence or control”; the term is (confusingly) used somewhat differently at times in robotics.

  17. Presumably, the machine is not sentient, or we could have had a much shorter argument for moral standing; as such, it cannot gain moral rights through an appeal to sentience. One might try to argue that such a being has rationality and thus, on some views of morality at least, must be granted moral standing. I am not convinced this is the case; while Kant sees morality as shared by rational beings, he makes it clear that the kinds of beings he is discussing have a will—the machines, as I have described them, do not (Kant 1786/1996). In general, I believe that the rationality criterion for moral standing is more complex than simple intelligence, and machines with bare intelligence will likely not satisfy it.

  18. It is not clear whether such a machine currently exists; I suspect it does not yet, although the evolution of drone technology seems to be heading us in this direction.

  19. While the choices may be influenced by the programming of the machine, human choices are also influenced by upbringing, societal pressure, brain chemistry, and so forth. Since moral theorizing generally views human autonomy as worth preserving despite these factors, machine autonomy likewise has worth.

  20. One might also make the argument that autonomy itself is sufficient for granting something moral standing. If we view autonomy as a good, then the fact that such machines exhibit autonomy suffices to grant them at least some consideration. We may place limits on the expression of their autonomy, just as we do for people, but we likely could not simply ignore it.

  21. Note that this argument is separate from the argument of whether such machines could exist. Ruffo (2012) believes that a machine cannot deliberate; any choice it makes would be a result of programming. As such, she would argue that no machine could determine its own goals. While I am unconvinced, settling that dispute is not necessary for our present purposes.

  22. A machine which is programmed to learn based on past interactions will be somewhere along this continuum, depending on the complexity of its programming; a simple program will likely result in a machine with little autonomy, but a complex program may approach the situation we have with humans. Since we also learn and adapt as a result of our interactions—following social norms, rules we have been taught, biological imperatives, and so forth—a sufficiently complex set of instructions for a machine may model this; if we consider ourselves to be at least somewhat autonomous, we must consider the machine to be as well.

  23. The analogy is somewhat imperfect, since we tend to take children to be beings who will increase in autonomy over time; they have the potential for as much autonomy as fully functioning adults, whereas we generally are not as optimistic about the prospects of the severely mentally disabled. However, I can see the potential for both sorts of machines: there may be some whose autonomy only ever reaches a low level and others whose autonomy develops over time. Hence the two prongs of this analogy are both useful, since I believe our treatment of those machines ought to parallel our treatment of similar humans.

  24. For that matter, we could likely repeat this argument when addressing the question of whether a machine can have a mind, since again such a machine will not share our evolutionary history and so forth.

  25. Think of this as the moral equivalent of the Turing Test: if the machine’s behavior is indistinguishable from a human’s behavior in most situations, then there is a prima facie case for treating it similarly. This argument is used by Singer (2002) to argue for our assumptions of sentience both in other people and in animals. A similar line of thought has been developed by Sparrow (2004, 2012) in trying to determine when we would view a machine as similar enough to a human to warrant the same moral standing.

  26. This concern has been echoed by Torrance (2012), although he seems more sympathetic to the dangers of mistakenly denying rights to machines which deserve them.

  27. There are already many researchers involved in trying to create intelligent machines, for instance via The Mind Machine Project at MIT. Furthermore, there has been a great deal of discussion about what consciousness or self-awareness in a machine would entail. For a number of optimistic outlooks on the matter, see Long and Kelley (2010), O’Regan (2012), and Gorbenko et al. (2012).

  28. This is why, presumably, no matter what decision one makes in the trolley case, one is acting unethically if she fails to consider the humanity of all of the people involved. Simply ignoring the personhood of any of the individuals involved is not an ethical move, no matter how much simpler it would make the scenario.

  29. Some claim that this argument could be used to extend rights to a fetus. However, I think it clear that a fetus does not, at the time it is a fetus, act like me in a wide range of situations; we weigh the probability of its personhood as less than that of an adult human, although how much less will depend on the individual.

  30. It is probably possible also to defend granting moral standing to such machines on a rationality-based understanding of the moral community; however, as I am sympathetic to the criticisms of such theories, I shall not attempt to do so here.

  31. This touches on questions relevant to moral agency as well, since, as people have noted (Asaro 2012), having legal responsibility would require us to be able to punish a machine which failed in its legal responsibilities; this requires us to know whether and how it is possible to do so.

  32. I say “theoretically” since, in practice, the change of nationality is fairly difficult; most people are pragmatically limited to the nationality of their birth, regardless of having a human right to change it.

  33. One could object that, speaking precisely, such entities will likely not be wholly virtual. Rather, they may well require the existence of physical objects in the same way that computer viruses require physical machines on which to reside; their existence is not independent of physical objects. However, the identity of the virus or the machine is quite distinct from the physical object(s) they depend on in a way unlike our experience of other identities; if they are embodied, it is in a very different sense than we currently understand.

  34. There is, of course, debate about whether this is a good precedent to have set. The point remains, however, that we have dealt with nonhuman persons in the law before; it is not entirely new territory.

  35. This is similar to questions raised by cloning a person without permission.

  36. As with any other member of the moral community, those rights may be overridden if necessary.

  37. See Floridi’s presentation of this conundrum (Floridi 2005) and an attempt to devise a test for self-consciousness in response (Bringsjord 2010).

References

  • Asaro, P. (2012). A body to kick, but still no soul to damn. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: the ethical and social implications of robotics. Cambridge, USA: MIT Press.

  • Basl, J. (2012). Machines as moral patients we shouldn’t care about (yet): the interests and welfare of current machines. In D. J. Gunkel, J. J. Bryson, & S. Torrance (Eds.), Proceedings of the AISB/IACAP World Congress 2012: The Machine Question: AI, Ethics and Moral Responsibility. Birmingham, England.

  • Bentham, J. (1996). An introduction to the principles of morals and legislation. J.H. Burns and H.L.A. Hart (Eds.) New York: Oxford University Press.

  • Bringsjord, S. (2010). Meeting Floridi’s challenge to artificial intelligence from the knowledge-game test for self-consciousness. Metaphilosophy, 41, 292–312.

  • Bryson, J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions: key social, psychological, ethical and design issues. USA: John Benjamins.

  • Code, L. (1991). Is the sex of the knower epistemologically significant? In: What can she know?: Feminist theory and the construction of knowledge (pp. 1–26). Ithaca, USA: Cornell University Press.

  • Floridi, L. (2005). Consciousness, agents and the knowledge game. Minds and Machines, 15, 415–444.

  • Gorbenko, A., Popov, V., & Sheka, A. (2012). Robot self awareness: exploration of internal states. Applied Mathematical Sciences, 6, 675–688.

  • Gunkel, D. J. (2012). A vindication of the rights of machines. In D. J. Gunkel, J. J. Bryson, & S. Torrance (Eds.), Proceedings of the AISB/IACAP World Congress 2012: The Machine Question: AI, Ethics and Moral Responsibility. Birmingham, England.

  • Kant, I. (1996). Groundwork of the metaphysics of morals. In M. Gregor (Ed.), Practical philosophy. Cambridge, UK: Cambridge University Press.

  • Legg, S. & Hutter, M. (2006a). A collection of definitions of intelligence. In: Goertzel, B. (Ed.), Proc. 1st Annual Artificial General Intelligence Workshop.

  • Legg, S. & Hutter, M. (2006b). A formal measure of machine intelligence. In Proc. Annual Machine Learning Conference of Belgium and The Netherlands. Ghent, Belgium.

  • Legg, S., & Hutter, M. (2007). Universal intelligence: a definition of machine intelligence. Minds and Machines, 17, 391–444.

  • Long, L. N., & Kelley, T. D. (2010). Review of consciousness and the possibility of conscious robots. Journal of Aerospace Computing, Information, and Communication, 7, 68–84.

  • Mill, J. S. (1993). On liberty and utilitarianism. New York, USA: Bantam.

  • Mills, C. (1999). The racial contract. Ithaca, USA: Cornell University Press.

  • National Institutes of Health. (2013). Council of Councils Working Group on the Use of Chimpanzees in NIH-Supported Research Report. http://dpcpsi.nih.gov/council/pdf/FNL_Report_WG_Chimpanzees.pdf. Accessed: 6 March 2013.

  • O'Regan, J. K. (2012). How to build a robot that is conscious and feels. Minds and Machines, 22, 117–136.

  • Piot-Ziegler, C., et al. (2010). Mastectomy, body deconstruction, and impact on identity: a qualitative study. British Journal of Health Psychology, 15, 479–510.

  • Ruffo, M. (2012). The robot, a stranger to ethics. In D. J. Gunkel, J. J. Bryson, & S. Torrance (Eds.), Proceedings of the AISB/IACAP World Congress 2012: The Machine Question: AI, Ethics and Moral Responsibility. Birmingham, England.

  • Scruton, R. (2006). Animal rights and wrongs. London, UK: Continuum.

  • Singer, P. (2002). Animal liberation. USA: ECCO.

  • Sparrow, R. (2004). The Turing Triage test. Ethics and Information Technology, 6, 203–213.

  • Sparrow, R. (2012). Can machines be people? In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: the ethical and social implications of robotics. Cambridge, USA: MIT Press.

  • Taylor, A. (1996). Nasty, brutish, and short: the illiberal intuition that animals don’t count. The Journal of Value Inquiry, 30, 265–277.

  • Torrance, S. (2012). The centrality of machine consciousness to machine ethics: between realism and social-relationism. In D. J. Gunkel, J. J. Bryson, & S. Torrance (Eds.), Proceedings of the AISB/IACAP World Congress 2012: The Machine Question: AI, Ethics and Moral Responsibility. Birmingham, England.

  • United Nations. (1948). The Universal Declaration of Human Rights. http://www.un.org/en/documents/udhr/. Accessed: 3 Jan 2013.

  • Wallach, W., & Allen, C. (2009). Moral machines: teaching robots right from wrong. Oxford, UK: Oxford University Press.

  • Warren, M. A. (1973). On the moral and legal status of abortion. The Monist, 57, 43–61.

  • Warwick, K. (2012). Robots with biological brains. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: the ethical and social implications of robotics. Cambridge, USA: MIT Press.

  • Zack, N. (2002). The philosophy of science and race. New York, USA: Routledge.

Author information

Correspondence to Erica L. Neely.

Cite this article

Neely, E.L. Machines and the Moral Community. Philos. Technol. 27, 97–111 (2014). https://doi.org/10.1007/s13347-013-0114-y
