Machines as Moral Patients We Shouldn’t Care About (Yet): The Interests and Welfare of Current Machines

Abstract

In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines; that is, they should be treated as if they lack the capacities that would give rise to psychological interests. Therefore, they are moral patients only if they have non-psychological interests. I then provide an account of what I call teleo interests, which constitute the most plausible type of non-psychological interest a being might have, and argue that even if current machines have teleo interests, agents need not concern themselves with those interests. Therefore, for all intents and purposes, current machines are not moral patients.

Notes

  1. I talk in terms of consciousness rather than intelligence to avoid taking a stand on the relationship between the two. I assume instead that it is possible for a machine to be intelligent without it being conscious.

  2. What exactly that means for the treatment of such beings will be a function of their nature and our relationships to them, just as equal consideration of humans is sensitive to these factors.

  3. It seems plausible that both the capacity for desires and the capacity for preferences require, or are partly constituted by, the capacity for attitudes. If this is false, then “attitudes” can be understood as “attitudes, preferences, or desires” throughout this paper.

  4. On some views of obligation, we might not have any obligation at all if we can’t possibly know what it is; that is, all our obligations are evidence- and capacity-relative. I will instead proceed as if our obligations are independent of our epistemic context but we are excused in circumstances where those obligations are unknowable. See (McMahan 2009), for example, on the distinction between permissibility and excuse.

  5. Others have used the term “intrinsic value” to mean something similar to what I mean by “moral status” (Floridi 2002). It is worth explicitly saying that this paper is not a defense of the view that machines lack any kind of moral status or intrinsic value. There may be many sources of intrinsic value; however, I argue that machines are not moral patients as that term is defined below.

  6. Though these are sometimes conflated (see O’Neill 2003), inherent worth is here understood to be different from intrinsic value.

  7. We need not take all the interests of all who are morally considerable into account at all times. If a being is morally considerable, then we ought to take its interests into account in contexts where we suspect there will be an appreciable impact on its welfare.

  8. I remain neutral, as much as possible, on how interests are to be taken into account and weighed against one another. For example, what constitutes equal consideration of interests and what one’s various interests entitle one to will differ on deontological or consequentialist views.

  9. This use of “moral patient” differs from some other uses of the term. The term is, at least sometimes, used synonymously or nearly synonymously with “moral status” as I have defined it above.

  10. There is an important sense in which being a moral patient is agent-relative or, at least, relative to agents of a kind. If there are agents that are radically different from us (for example, agents psychologically incapable of taking the suffering of non-agents into account), then they cannot have obligations to non-human animals in virtue of the interests those animals have in virtue of their suffering. For beings like that, it is possible that non-human animals aren’t moral patients even while they are for agents like us. The question of how our agency informs which things are patients relative to us is an interesting one, but I set it aside here. My conclusions are intended to apply to agents like us, and my claims concern which things are patients relative to us.

  11. On some particular Objective List Views, having consciousness will be a necessary condition for having a welfare; on such views, access to the objective goods is only possible for conscious beings. Even on such views, an individual’s welfare will not depend solely on his or her particular mental states.

  12. Assuming that an Objective-List view of welfare is true, nothing precludes there being other kinds of interests. It might be, for example, that being green is objectively good for an entity on some view. More plausibly, those who endorse what is often called the “capabilities approach” to well-being argue that certain objective features of a life, like having the resources to pursue projects and the freedom to do so, contribute to welfare independently of any attitudes a given individual may have (see Sen 1993; Nussbaum 2001). Another kind of Objective-List view is known as a dignity or integrity view. On such views, it is a component or constituent of welfare that a human’s or animal’s integrity or dignity be maintained or respected. Such views are often appealed to in arguments against the creation of transgenic organisms (Bovenkerk, Brom, and Van Den Bergh 2002; Gavrell Ortiz 2004). In this paper, I discuss only psychological and teleo interests. While some components or constituents of welfare, such as those described in the capabilities approach, are not strictly psychological, my arguments are intended to show that many of these components have as a precondition that an entity have certain psychological capacities, namely the capacity for attitudes. With respect to those components or constituents of welfare that do not have psychological capacities as a precondition, such as dignity- or integrity-based accounts, the arguments against the moral significance of the teleo interests of machines apply equally well. Furthermore, such accounts fail to meet the requirements of non-arbitrariness and non-derivativeness discussed below.

  13. This does not mean that there are no cases where we should favor one life over another. For example, if we must decide between saving the life of our own child and that of a stranger, we have good reason to save our child. However, this isn’t because our child is a more important moral patient; it has to do with the consequences, values, and relationships at stake.

  14. Whether the probability of creating consciousness unlike our own is high or low depends on how researchers attempt to create artificial consciousness. If scientists try to simulate human minds by creating functional replicas, then the consciousness created, if such a research program succeeds, is likely to be very much like our own. On the other hand, if scientists try to program or simulate consciousnesses that bear more resemblance to those of non-human animals or are completely novel, the probability of creating consciousness unlike our own is much higher.

  15. We must also determine how psychological interests of various kinds and strengths should be weighted when they come into conflict. However, since my concern is whether machines are patients at all, I do not address this issue.

  16. For further argument against this view, called Sensory Hedonism, see (Feldman 2004).

  17. Those who disagree will also be inclined to disagree about the relationship of teleo interests to welfare. Those who reject any non-mentalistic components of welfare will then agree with my assessment that mere machines are not moral patients.

  18. We learn more and more about child consciousness all the time, and so perhaps this is empirically false. However, we don’t need to know more about the consciousness of babies to know that they are moral patients.

  19. Thanks to reviewer 1 for these examples.

  20. Perhaps my failure to see how it could be good for such a machine to have an authentic existence just stems from my failure to even imagine what it would be like to be a being with no attitudes but with concepts. At the very least, those who wish to disagree owe us an argument that having the concept of authenticity, on a very permissive view of concepts, is sufficient for having a psychological interest.

  21. I’m assuming that any individual that has the capacity for attitudes has at least one attitude about something.

  22. There is considerable controversy over which mental capacities non-human animals have. See (Tomasello and Call 1997) for a discussion of some of the issues concerning primate cognition. However, there is little doubt that many non-human animals have aversive attitudes towards those conditions we identify as painful. See (Varner 1998, chap. 2) for an overview of the evidence that non-human animals have the capacity for suffering.

  23. If we understand obligations as being a function of our epistemic context, so that what’s obligatory and permissible is limited by what we can or can’t know, the argument is even easier to make. By “ought implies can,” we can only be obligated to take the psychological interests of machines into account if it is possible to determine what those interests are. But, given our current limitations, we can’t make such a determination. Therefore, we are under no obligation whatsoever to take current machines into account and may permissibly behave as if they are mere machines.

  24. The alternative is to give up research involving machines. Until we have good reason to believe that we are creating the functional bases for consciousness, considering a ban on machine research seems overly restrictive.

  25. Important to the debates about the coherence of attributing interests to non-sentient organisms is the distinction between taking an interest and having an interest (see Taylor 1989; Varner 1998; Basl and Sandler 2013a, 2013b on the distinction). While taking an interest in X certainly would seem to require consciousness, since it implies caring or otherwise having some attitude about X, proponents of the interests of non-sentient organisms argue that something can have an interest in X (X can be good for that thing) independently of the interests it takes. A common, though controversial, example might be the interest smokers have in giving up smoking independently of whether they actually care about doing so.

  26. It’s worth noting that proponents of views on which what’s good for non-sentient organisms is derivative from our interests can’t easily account for at least some ascriptions of interests to non-sentient organisms. For example, weed killer is instrumentally valuable for us precisely because it is bad for weeds. It would be strange to say that weed killer is good for weeds; it is good for killing weeds. However, this worry is not decisive; the way we talk is, at best, a starting point for thinking about these issues.

  27. All existing organisms will have interests of this kind, but sentient organisms will have additional interests.

  28. A similar account of the interests of non-sentient organisms can be found in (Varner 1998).

  29. This is a very brief summary of what is known as the etiological account of functions (Wright 1973; Millikan 1989; Neander 1991; Millikan 1999; Neander 2008).

  30. Of course, not all traits are the result of selection. They may be the result of drift, or they may be evolutionary spandrels (Gould and Lewontin 1979). Those who wish to ground teleology in natural selection need not be adaptationists.

  31. Whether past selection explains why a given organism has the trait that it does is a matter of some controversy. See for example (Sober 1984; Neander 1988; Forber 2005).

  32. For a discussion of these issues see (Basl and Sandler 2013a; Basl and Sandler 2013b).

  33. Another way to conceptualize this difference is as a difference in the nature of the selection processes that give rise to or explain organisms and artifacts. Artifacts are the result of artificial selection processes, while organisms are the result of natural selection processes.

  34. For a more detailed discussion of this objection and others, as well as a more rigorous defense of the application of the etiological account of teleology to artifacts see (Basl and Sandler 2013a; Basl and Sandler 2013b).

  35. This may be true only if we also do what is necessary to maintain a machine’s capacity to serve that purpose. It is possible to be hard on machines. For example, we can brake too hard and too often in our car. In doing so, while we use the braking system for the end for which it was designed, we also undermine the brakes’ capacity to continue to serve that purpose. Thanks to Jeff Behrends for pushing me on this point.

  36. It is not that teleo interests are never relevant or that we never desire to promote them. When someone is in a coma, for example, the best we can do, often, is to help satisfy their teleo interests. However, this is not a case of a conflict between psychological and teleo interests.

  37. Thanks to Ron Sandler for pushing me on this point.

  38. Thanks to an anonymous reviewer for raising this issue.

  39. Of course, we have many other reasons to take artifacts into account. They may be other people’s property, they may be valuable scientific achievements, etc.

References

  • Basl, J., & Sandler, R. (2013a). The Good of Non-Sentient Entities: Organisms, Artifacts, and Synthetic Biology. Studies in History and Philosophy of the Biological and Biomedical Sciences (in press).

  • Basl, J., & Sandler, R. (2013b). Three Puzzles Regarding the Moral Status of Synthetic Organisms. In G. Kaebnick (Ed.), “Artificial Life”: Synthetic Biology and the Bounds of Nature. Cambridge, MA: MIT Press (in press).

  • Behrends, J. (2011). A New Argument for the Multiplicity of the Good-for Relation. Journal of Value Inquiry, 45(2), 121–133.

  • Bovenkerk, B., Brom, F. W. A., & Van Den Bergh, B. J. (2002). Brave New Birds: The Use of ‘Animal Integrity’ in Animal Ethics. The Hastings Center Report, 32(1), 16–24.

  • Cahen, H. (2002). Against the Moral Considerability of Ecosystems. In A. Light & H. Rolston III (Eds.), Environmental Ethics: An Anthology. Blackwell.

  • Feinberg, J. (1963). The Rights of Animals and Future Generations. Columbia Law Review, 63, 673.

  • Feldman, F. (2004). Pleasure and the Good Life: Concerning the Nature, Varieties and Plausibility of Hedonism. USA: Oxford University Press.

  • Floridi, L. (2002). On the Intrinsic Value of Information Objects and the Infosphere. Ethics and Information Technology, 4(4), 287–304.

  • Forber, P. (2005). On the Explanatory Roles of Natural Selection. Biology and Philosophy, 20(2–3), 329–342.

  • Gavrell Ortiz, S. E. (2004). Beyond Welfare: Animal Integrity, Animal Dignity, and Genetic Engineering. Ethics & the Environment, 9(1), 94–120.

  • Goodpaster, K. (1978). On Being Morally Considerable. The Journal of Philosophy, 75, 308–325.

  • Gould, S. J., & Lewontin, R. C. (1979). The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme. Proceedings of the Royal Society of London. Series B, Biological Sciences, 205(1161), 581–598.

  • Griffin, J. (1988). Well-Being: Its Meaning, Measurement, and Moral Importance. USA: Oxford University Press.

  • McMahan, J. (2009). Killing in War. Oxford: Oxford University Press.

  • Millikan, R. G. (1989). In Defense of Proper Functions. Philosophy of Science, 56(2), 288–302.

  • Millikan, R. G. (1999). Wings, Spoons, Pills, and Quills: A Pluralist Theory of Function. The Journal of Philosophy, 96(4), 191–206.

  • Neander, K. (1988). What Does Natural Selection Explain? Correction to Sober. Philosophy of Science, 55, 422–426.

  • Neander, K. (1991). Functions as Selected Effects: The Conceptual Analyst’s Defense. Philosophy of Science, 58(2), 168–184.

  • Neander, K. (2008). The Teleological Notion of ‘Function’. Australasian Journal of Philosophy, 69(4), 454–468.

  • Nozick, R. (1974). Anarchy, State, and Utopia. New York: Basic Books.

  • Nussbaum, M. C. (2001). Women and Human Development: The Capabilities Approach. Cambridge: Cambridge University Press.

  • O’Neill, J. (2003). The Varieties of Intrinsic Value. In A. Light & H. Rolston III (Eds.), Environmental Ethics: An Anthology. Blackwell.

  • Rosati, C. S. (2009). Relational Good and the Multiplicity Problem. Philosophical Issues, 19(1), 205–234.

  • Sandler, R. (2007). Character and Environment: A Virtue-Oriented Approach to Environmental Ethics. Columbia University Press.

  • Sandler, R., & Simons, L. (2012). The Value of Artefactual Organisms. Environmental Values, 21(1), 43–61.

  • Sen, A. (1993). Capability and Well-Being. In M. Nussbaum & A. Sen (Eds.), The Quality of Life (pp. 30–54). Oxford University Press.

  • Singer, P. (2009). Animal Liberation: The Definitive Classic of the Animal Movement (Reissue ed.). Harper Perennial Modern Classics.

  • Sober, E. (1984). The Nature of Selection. Cambridge, MA: MIT Press.

  • Streiffer, R., & Basl, J. (2011). Applications of Biotechnology to Animals in Agriculture. In T. Beauchamp & R. Frey (Eds.), The Oxford Handbook of Animal Ethics. Oxford: Oxford University Press.

  • Taylor, P. W. (1989). Respect for Nature. Studies in Moral, Political, and Legal Philosophy. Princeton, N.J.: Princeton University Press.

  • Tomasello, M., & Call, J. (1997). Primate Cognition (1st ed.). USA: Oxford University Press.

  • Varner, G. (1998). In Nature’s Interest. Oxford: Oxford University Press.

  • Wright, L. (1973). Functions. Philosophical Review, 82, 139–168.


Acknowledgments

Basl, J., “The Moral Status of Artificial Intelligences,” in Ethics and Emerging Technologies, R. Sandler (ed.), Palgrave-Macmillan, forthcoming. I would also like to thank Ronald Sandler, Joanna Bryson, David Gunkel, and participants of The Machine Question Symposium, as well as the two anonymous referees, for helpful comments and questions.

Author information

Correspondence to John Basl.

Cite this article

Basl, J. Machines as Moral Patients We Shouldn’t Care About (Yet): The Interests and Welfare of Current Machines. Philos. Technol. 27, 79–96 (2014). https://doi.org/10.1007/s13347-013-0122-y
