The artificial view: toward a non-anthropocentric account of moral patiency

Abstract

In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions regarding sentience ascription, and by extension how we identify moral patients. The main difference between the argument I provide here and traditional arguments surrounding moral attributability is that I do not necessarily defend the view that internal states ground our ascriptions of moral patiency. This is in contrast to views such as those defended by Singer (1975, 2011) and Torrance (2008), where concepts such as sentience play starring roles. I will raise both conceptual and epistemic issues with regard to this sense of sentience. While this does not preclude the use of sentience outright, it suggests that we should be more careful in using internal mental states to ground our moral ascriptions. Following from this, I suggest other avenues for further exploration into machine moral patiency which may not have the same shortcomings as the Organic View.


Notes

  1. A patient-orientated approach to ethics is not concerned with the perpetrator of a specific action, but rather attempts to zero in on the victim or receiver of the action (Floridi, 1999). This type of approach to ethics is considered non-standard and has been highly influential in both the “animal liberation” movement and “deep ecology” approaches to environmentalism (see Leopold, 1948; Naess, 1973; Singer, 1975, 2011). Both place an emphasis on the victims of moral harms: in the case of animal liberation, the harm we do to animals; in the case of deep ecology, the harm we do to the environment.

  2. An artificial agent is artificial in the sense that it has been manufactured by intentional agents (us) out of pre-existing materials, which are external to the manufacturers themselves (Himma, 2009). It is an agent in the sense that it is capable of performing actions (Floridi and Sanders, 2004: 349). An easy example of such an artificial agent would be a cellphone, as it is manufactured by humans and can perform actions, such as basic arithmetic functions or responding to queries via online searches.

  3. Gunkel (2012: 5) considers the “machine question” to be the flip side of the “animal question”: both concern the moral standing of non-human entities.

  4. Sentience can be understood as the capacity for an entity to have phenomenal/subjective/qualitative states of experience (Bostrom and Yudkowsky, 2011: 7).

  5. For the sake of argument, I focus here on the experience of pain, but logically it would be possible to subject any type of internal mental state to the same type of analysis. Any theory which posits an “experience of X” claim must eventually answer the question of who or what (i.e. what type of mind) is experiencing, or capable of experiencing, X.

  6. Torrance does not believe that functionalist accounts of mind fully capture the qualitative aspects of experience. He thus believes in the metaphysical possibility of “philosophical zombies”: beings that look and behave indistinguishably from us but lack phenomenal conscious states of experience (Torrance, 2008). This is a thorny philosophical issue in its own right, but I will not go into further detail here.

  7. Phenomenal in the sense of having the capacity for conscious awareness. When applied to his argument for moral status, however, Torrance does not require that the entity in question be self-aware, only sentient (2008: 503).

  8. My own view is that there is in fact no difference between what can be “functionally” known about the mind and “phenomenal” aspects of mind: the phenomenal is just a special case of the functional, and in this way, there is no “hard problem” of consciousness. See Chalmers (1996) for a defense of the hard problem, and Cohen and Dennett (2011) for a substantive critique.

  9. That is, unfair moral discrimination based on the temperature of an entity’s blood.

  10. Torrance does address this issue (2014) and refers to the view that I broadly defend in this paper as “social relationism” (SR). Torrance claims that SR positions do not offer us “inherently right or wrong answers” when it comes to questions of moral patiency (2014: 12). I think this is a somewhat superficial reading of SR approaches, but it is beyond the scope of this paper to go into detail in this regard, as my focus here is on Torrance’s specific claims regarding the criteria for moral status, not on realism versus social relationism more generally.

  11. My decision to make use of the intentional stance is far from uncontroversial. Dennett believes that a third-person, materialistic starting point is the most appropriate one for further investigations into mentalistic concepts. This, however, can be contested on various grounds. See, for example, Nagel (1986), Ratcliffe (2001) and Slors (1996, 2015) for various philosophical issues with Dennett’s account. It is far beyond the scope of the present paper to resolve these and other problems with Dennett’s theory. For my purposes, however, what matters is that social-relational accounts can be amended with a theory which accounts for mental states, the details of which would still need to be worked out.

  12. These could be signs that are indicative of suffering, for example vocalizations (sighing or moaning), facial expressions (grimacing, frowning, rapid blinking, etc.) or bodily movements (being hunched over, exterior rigidity, etc.).

  13. For a critique of the Moral Turing Test, see Arnold and Scheutz (2016).

  14. Also see Wallach and Allen (2009: 70) for an exposition of the comparative Moral Turing Test (cMTT), which asks “which of these agents is less moral than the other?”, as opposed to the question posed in the MTT of which entity is the artificial agent.

  15. A situation in which a choice must be made as to which of two human lives to save.

  16. Another arena requiring further research is the use and distribution of “entertainment” robots (Royakkers and van Est, 2015), more specifically sex robots, which raise questions concerning the role of consent and ownership, and how (if at all) these concepts apply in this case. If we concede that such robots are AAs, can they give meaningful consent? Moreover, can we legitimately speak of acts such as “robotic rape”, and punish those performing such acts (see Danaher, 2017a)? More work needs to be done at both the philosophical and regulatory levels to unpack solutions to these and other questions.

References

  1. Arnold, T., & Scheutz, M. (2016). Against the moral Turing test: Accountable design and the moral reasoning of autonomous systems. Ethics and Information Technology,18(2), 103–115. https://doi.org/10.1007/s10676-016-9389-x.


  2. Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. In K. Frankish (Ed.), The Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press.


  3. Brown, C. (2015). Fish intelligence, sentience and ethics. Animal Cognition,18(1), 1–17. https://doi.org/10.1007/s10071-014-0761-0.


  4. Chalmers, D. J. (1996). The conscious mind. Oxford: Oxford University Press.


  5. Champagne, M., & Tonkens, R. (2013). Bridging the responsibility gap. Philosophy and Technology,28(1), 125–137.


  6. Coeckelbergh, M. (2010a). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology,12(3), 235–241. https://doi.org/10.1007/s10676-010-9221-y.


  7. Coeckelbergh, M. (2010b). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology,12(3), 209–221. https://doi.org/10.1007/s10676-010-9235-5.


  8. Coeckelbergh, M. (2014). The moral standing of machines: Towards a relational and non-cartesian moral hermeneutics. Philosophy & Technology,27, 61–77. https://doi.org/10.1007/s13347-013-0133-8.


  9. Cohen, M. A., & Dennett, D. C. (2011). Consciousness cannot be separated from function. Trends in Cognitive Sciences,15(8), 358–364. https://doi.org/10.1016/j.tics.2011.06.008.


  10. Danaher, J. (2017a). Robotic rape and robotic child sexual abuse: Should they be criminalised? Criminal Law and Philosophy,11(1), 71–95. https://doi.org/10.1007/s11572-014-9362-x.


  11. Danaher, J. (2017b). The rise of the robots and the crisis of moral patiency. AI and Society. https://doi.org/10.1007/s00146-017-0773-9.


  12. Dennett, D. (2009). Intentional systems theory. In The Oxford handbook of philosophy of mind (pp. 1–22). Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199262618.003.0020.

  13. Dennett, D. C. (1989). The intentional stance. Cambridge, Massachusetts: MIT Press. https://doi.org/10.1017/S0140525X00058611.


  14. Dennett, D. C. (1996). Kinds of minds: Toward an understanding of consciousness. New York: Basic Books.


  15. Floridi, L. (1999). Information ethics: On the philosophical foundation of computer ethics. Ethics and Information Technology,1, 37–56. https://doi.org/10.1023/A:1010018611096.


  16. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machine,14, 349–379. https://doi.org/10.2139/ssrn.1124296.


  17. Gerdes, A., & Øhrstrøm, P. (2015). Issues in robot ethics seen through the lens of a moral Turing test. Journal of Information, Communication and Ethics in Society,13(2), 98–109. https://doi.org/10.1108/JICES-09-2014-0038.


  18. Gunkel, D. J. (2012). The machine question. London: MIT Press.


  19. Gunkel, D. J. (2017). Mind the gap: Responsible robotics and the problem of responsibility. Ethics and Information Technology. https://doi.org/10.1007/s10676-017-9428-2.


  20. Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology,11(1), 19–29. https://doi.org/10.1007/s10676-008-9167-5.


  21. Johansson, L. (2010). The functional morality of robots. International Journal of Technoethics,1(4), 65–73.


  22. Johnson, D. G. (2015). Technology with no human responsibility? Journal of Business Ethics. https://doi.org/10.1007/s10551-014-2180-1.


  23. Johnson, D. G., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology,10(2–3), 123–133. https://doi.org/10.1007/s10676-008-9174-6.


  24. Johnson, D. G., & Noorman, M. (2014). Artefactual agency and artefactual moral agency. In P. Kroes & P.-P. Verbeek (Eds.), The moral status of technical artefacts (pp. 143–158). New York: Springer.


  25. Leopold, A. (1948) A land ethic. In A sand county almanac with essays on conservation from Round River. New York: Oxford University Press.

  26. Müller, V. C. (2014). Autonomous killer robots are probably good news. Frontiers in Artificial Intelligence and Applications,273, 297–305. https://doi.org/10.3233/978-1-61499-480-0-297.


  27. Naess, A. (1973). The shallow and the deep long-range ecology movements. Inquiry,16, 95–100.


  28. Nagel, T. (1986). The view from nowhere. New York: Oxford University Press. https://doi.org/10.2307/2108026.


  29. Nyholm, S. (2017). Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9943-x.


  30. Powers, T. M. (2013). On the moral agency of computers. Topoi,32(2), 227–236. https://doi.org/10.1007/s11245-012-9149-4.


  31. Ratcliffe, M. (2001). A kantian stance on the intentional stance. Biology and Philosophy,16(1), 29–52. https://doi.org/10.1023/A:1006710821443.


  32. Ritchie, J. (2008). Understanding naturalism. Stocksfield: Acumen.


  33. Royakkers, L., & van Est, R. (2015). A literature review on new robotics: Automation from love to war. International Journal of Social Robotics,7(5), 549–570. https://doi.org/10.1007/s12369-015-0295-x.


  34. Singer, P. (1975). Animal liberation: A new ethics for our treatment of animals. New York: New York Review of Books.


  35. Singer, P. (2011). The expanding circle: Ethics, evolution and moral progress. Princeton, NJ: Princeton University Press.


  36. Slors, M. (1996). Why Dennett cannot explain what it is to adopt the intentional stance. The Philosophical Quarterly,46(182), 93–98.


  37. Slors, M. (2015). Two improvements to the intentional stance theory: Hutto and Satne on naturalizing content. Philosophia (United States),43(3), 579–591. https://doi.org/10.1007/s11406-015-9627-1.


  38. Sparrow, R. (2004). The Turing triage test. Ethics and Information Technology,6(4), 203–213. https://doi.org/10.1007/s10676-004-6491-2.


  39. Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy,24(1), 62–78. https://doi.org/10.1111/j.1468-5930.2007.00346.x.


  40. Stich, S. P. (1981). Dennett on intentional systems. Functionalism and the Philosophy of Mind,12(1), 39–62.


  41. Sullins, J. P. (2011). When is a robot a moral agent? In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 151–161). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511978036.021.


  42. Torrance, S. (2007). Two conceptions of machine phenomenality. Journal of Consciousness Studies,14(7), 154–166.


  43. Torrance, S. (2008). Ethics and consciousness in artificial agents. AI and Society,22(4), 495–521. https://doi.org/10.1007/s00146-007-0091-8.


  44. Torrance, S. (2013). Artificial agents and the expanding ethical circle. AI and Society,28(4), 399–414. https://doi.org/10.1007/s00146-012-0422-2.


  45. Torrance, S. (2014). Artificial consciousness and artificial ethics: Between realism and social relationism. Philosophy and Technology,27(1), 9–29. https://doi.org/10.1007/s13347-013-0136-5.


  46. Wallach, W., & Allen, C. (2009). Moral machines. New York: Oxford University Press.


  47. Wareham, C. (2011). On the moral equality of artificial agents. International Journal of Technoethics,2(1), 35–42. https://doi.org/10.4018/jte.2011010103.


  48. Yu, P., & Fuller, G. (1986). A critique of Dennett. Synthese,66(3), 453–476.



Acknowledgements

I would like to thank my supervisor and mentor Tanya de Villiers-Botha for her insightful comments and guidance. I am also indebted to Deryck Hougaard and Lize Alberts who read earlier drafts of this paper and provided very useful feedback.

Author information


Corresponding author

Correspondence to Fabio Tollon.




Cite this article

Tollon, F. The artificial view: toward a non-anthropocentric account of moral patiency. Ethics Inf Technol (2020). https://doi.org/10.1007/s10676-020-09540-4


Keywords

  • Machine moral patiency
  • Sentience
  • Anthropocentrism
  • Intentional stance
  • Organic view of ethical status