Abstract
In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions regarding sentience ascription, and by extension regarding how we identify moral patients. The main difference between the argument I provide here and traditional arguments surrounding moral attributability is that I do not necessarily defend the view that internal states ground our ascriptions of moral patiency. This is in contrast to views such as those defended by Singer (1975, 2011) and Torrance (2008), where concepts such as sentience play starring roles. I will raise both conceptual and epistemic issues with regard to this sense of sentience. While this does not preclude the use of sentience outright, it suggests that we should be more careful in appealing to internal mental states to ground our moral ascriptions. Following from this, I suggest other avenues for further exploration into machine moral patiency which may not have the same shortcomings as the Organic View.
Notes
A patient-orientated approach to ethics is not concerned with the perpetrator of a specific action, but rather attempts to zero in on the victim or receiver of the action (Floridi, 1999). This type of approach to ethics is considered non-standard and has been incredibly influential in both the “animal liberation” movement and “deep ecology” approaches to environmentalism (see Leopold, 1948; Naess, 1973; Singer, 1975, 2011). Both place an emphasis on the victims of moral harms; in the case of animal liberation, the harm we do to animals, and in the case of deep ecology the harm we do to the environment.
An artificial agent is artificial in the sense that it has been manufactured by intentional agents (us) out of pre-existing materials, which are external to the manufacturers themselves (Himma, 2009). It is an agent in the sense that it is capable of performing actions (Floridi and Sanders, 2004: 349). An easy example of such an artificial agent would be a cellphone, as it is manufactured by humans and can perform actions, such as basic arithmetic functions or responding to queries via online searches.
Gunkel (2012: 5) considers the “machine question” to be the flip side of the “animal question”: both concern the moral standing of non-human entities.
Sentience can be understood as the capacity for an entity to have phenomenal/subjective/qualitative states of experience (Bostrom and Yudkowsky, 2011: 7).
For the sake of argument, I focus here on the experience of pain, but logically it would be possible to subject any type of internal mental state to the same type of analysis. Any theory which posits an “experience of X” claim must eventually answer to the question of who or what (i.e. what type of mind) is experiencing, or capable of experiencing, X.
Torrance does not believe that functionalist accounts of mind fully capture the qualitative aspects of experience. He thus believes in the metaphysical possibility of “philosophical zombies”: humans that look and behave indistinguishably from us but lack phenomenally conscious states of experience (Torrance, 2008). This is a thorny philosophical issue in its own right, but I will not go into further detail here.
Phenomenal in the sense of having the capacity for conscious awareness. When applied to his argument for moral status, however, Torrance does not require that the entity in question be self-aware, only sentient (2008: 503).
My own view is that there is in fact no difference between what can be “functionally” known about the mind and “phenomenal” aspects of mind: the phenomenal is just a special case of the functional, and in this way, there is no “hard problem” of consciousness. See Chalmers (1996) for a defense of the hard problem, and Cohen and Dennett (2011) for a substantive critique.
That is, unfair moral discrimination based on the temperature of an entity’s blood.
Torrance does address this issue (2014) and refers to the view that I broadly defend in this paper as “social relationism” (SR). Torrance claims that SR positions do not offer us “inherently right or wrong answers” when it comes to questions of moral patiency (2014: 12). I think this is a somewhat superficial reading of SR approaches, but it is beyond the scope of this paper to go into any detail in this regard, as my focus here is on Torrance’s specific claims regarding the criteria for moral status, not on realism versus social relationism more generally.
My decision to make use of the intentional stance is far from uncontroversial. Dennett believes that a third-person, materialistic starting point is the most appropriate one for further investigations into mentalistic concepts. This, however, can be contested on various grounds. See, for example, Nagel (1986), Ratcliffe (2001) and Slors (1996, 2015) for various philosophical issues with Dennett’s account. It is far beyond the scope of the present paper to resolve these and other problems with Dennett’s theory. For my purposes, however, what matters is that social-relational accounts can be amended with a theory which accounts for mental states, the details of which would still need to be worked out.
These could be signs that are indicative of suffering, for example vocalizations (sighing or moaning), facial expressions (grimacing, frowning, rapid blinking, etc.) or bodily movement (being hunched over, exterior rigidity, etc.).
For a critique of the Moral Turing Test, see Arnold and Scheutz (2016).
Also see Wallach and Allen (2009: 70) for an exposition of the comparative Moral Turing Test (cMTT), which asks “which of these agents is less moral than the other?”, as opposed to the question of which entity is the artificial agent, posed in the MTT.
A situation in which a choice must be made as to which of two human lives to save.
Another arena requiring further research is the use and distribution of “entertainment” robots (Royakkers and van Est, 2015). More specifically, sex robots, which raise questions concerning the role of consent and ownership, and how (if at all) these concepts refer in this case. If we concede that such robots are artificial agents, can they give meaningful consent? Moreover, can we legitimately speak of acts such as “robotic rape”, and punish those performing such acts (see Danaher, 2017a)? More work needs to be done at both the philosophical and regulatory levels to unpack solutions to these and other questions.
References
Arnold, T., & Scheutz, M. (2016). Against the moral Turing test: Accountable design and the moral reasoning of autonomous systems. Ethics and Information Technology, 18(2), 103–115. https://doi.org/10.1007/s10676-016-9389-x.
Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. In K. Frankish (Ed.), The Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press.
Brown, C. (2015). Fish intelligence, sentience and ethics. Animal Cognition, 18(1), 1–17. https://doi.org/10.1007/s10071-014-0761-0.
Chalmers, D. J. (1996). The conscious mind. Oxford: Oxford University Press.
Champagne, M., & Tonkens, R. (2013). Bridging the responsibility gap. Philosophy and Technology, 28(1), 125–137.
Coeckelbergh, M. (2010a). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235–241. https://doi.org/10.1007/s10676-010-9221-y.
Coeckelbergh, M. (2010b). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209–221. https://doi.org/10.1007/s10676-010-9235-5.
Coeckelbergh, M. (2014). The moral standing of machines: Towards a relational and non-Cartesian moral hermeneutics. Philosophy & Technology, 27, 61–77. https://doi.org/10.1007/s13347-013-0133-8.
Cohen, M. A., & Dennett, D. C. (2011). Consciousness cannot be separated from function. Trends in Cognitive Sciences, 15(8), 358–364. https://doi.org/10.1016/j.tics.2011.06.008.
Danaher, J. (2017a). Robotic rape and robotic child sexual abuse: Should they be criminalised? Criminal Law and Philosophy, 11(1), 71–95. https://doi.org/10.1007/s11572-014-9362-x.
Danaher, J. (2017b). The rise of the robots and the crisis of moral patiency. AI and Society. https://doi.org/10.1007/s00146-017-0773-9.
Dennett, D. (2009). Intentional systems theory. In The Oxford handbook of philosophy of mind (pp. 1–22). Oxford: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199262618.003.0020.
Dennett, D. C. (1989). The intentional stance. Cambridge, Massachusetts: MIT Press. https://doi.org/10.1017/S0140525X00058611.
Dennett, D. C. (1996). Kinds of minds: Toward an understanding of consciousness. New York: Basic Books.
Floridi, L. (1999). Information ethics: On the philosophical foundation of computer ethics. Ethics and Information Technology, 1, 37–56. https://doi.org/10.1023/A:1010018611096.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machine, 14, 349–379. https://doi.org/10.2139/ssrn.1124296.
Gerdes, A., & Øhrstrøm, P. (2015). Issues in robot ethics seen through the lens of a moral Turing test. Journal of Information, Communication and Ethics in Society, 13(2), 98–109. https://doi.org/10.1108/JICES-09-2014-0038.
Gunkel, D. J. (2012). The machine question. London: MIT Press.
Gunkel, D. J. (2017). Mind the gap: responsible robotics and the problem of responsibility. Ethics and Information Technology. https://doi.org/10.1007/s10676-017-9428-2.
Himma, K. E. (2009). Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology, 11(1), 19–29. https://doi.org/10.1007/s10676-008-9167-5.
Johansson, L. (2010). The functional morality of robots. International Journal of Technoethics, 1(4), 65–73.
Johnson, D. G. (2015). Technology with no human responsibility? Journal of Business Ethics. https://doi.org/10.1007/s10551-014-2180-1.
Johnson, D. G., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology, 10(2–3), 123–133. https://doi.org/10.1007/s10676-008-9174-6.
Johnson, D. G., & Noorman, M. (2014). Artefactual agency and artefactual moral agency. In P. Kroes & P.-P. Verbeek (Eds.), The moral status of technical artefacts (pp. 143–158). New York: Springer.
Leopold, A. (1948) A land ethic. In A sand county almanac with essays on conservation from Round River. New York: Oxford University Press.
Müller, V. C. (2014). Autonomous killer robots are probably good news. Frontiers in Artificial Intelligence and Applications, 273, 297–305. https://doi.org/10.3233/978-1-61499-480-0-297.
Naess, A. (1973). The shallow and the deep long-range ecology movements. Inquiry, 16, 95–100.
Nagel, T. (1986). The view from nowhere. New York: Oxford University Press. https://doi.org/10.2307/2108026.
Nyholm, S. (2017). Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-9943-x.
Powers, T. M. (2013). On the moral agency of computers. Topoi, 32(2), 227–236. https://doi.org/10.1007/s11245-012-9149-4.
Ratcliffe, M. (2001). A Kantian stance on the intentional stance. Biology and Philosophy, 16(1), 29–52. https://doi.org/10.1023/A:1006710821443.
Ritchie, J. (2008). Understanding naturalism. Stocksfield: Acumen.
Royakkers, L., & van Est, R. (2015). A literature review on new robotics: Automation from love to war. International Journal of Social Robotics., 7(5), 549–570. https://doi.org/10.1007/s12369-015-0295-x.
Singer, P. (1975). Animal liberation: A new ethics for our treatment of animals. New York: New York Review of Books.
Singer, P. (2011). The expanding circle: Ethics, evolution and moral progress. New Jersey: Princeton University Press.
Slors, M. (1996). Why Dennett cannot explain what it is to adopt the intentional stance. The Philosophical Quarterly, 46(182), 93–98.
Slors, M. (2015). Two improvements to the intentional stance theory: Hutto and Satne on naturalizing content. Philosophia (United States), 43(3), 579–591. https://doi.org/10.1007/s11406-015-9627-1.
Sparrow, R. (2004). The Turing triage test. Ethics and Information Technology, 6(4), 203–213. https://doi.org/10.1007/s10676-004-6491-2.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–78. https://doi.org/10.1111/j.1468-5930.2007.00346.x.
Stich, S. P. (1981). Dennett on intentional systems. Functionalism and the Philosophy of Mind, 12(1), 39–62.
Sullins, J. P. (2011). When is a robot a moral agent? In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 151–161). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511978036.021.
Torrance, S. (2007). Two conceptions of machine phenomenality. Journal of Consciousness Studies, 14(7), 154–166.
Torrance, S. (2008). Ethics and consciousness in artificial agents. AI and Society, 22(4), 495–521. https://doi.org/10.1007/s00146-007-0091-8.
Torrance, S. (2013). Artificial agents and the expanding ethical circle. AI and Society, 28(4), 399–414. https://doi.org/10.1007/s00146-012-0422-2.
Torrance, S. (2014). Artificial consciousness and artificial ethics: Between realism and social relationism. Philosophy and Technology, 27(1), 9–29. https://doi.org/10.1007/s13347-013-0136-5.
Wallach, W., & Allen, C. (2009). Moral machines. New York: Oxford University Press.
Wareham, C. (2011). On the moral equality of artificial agents. International Journal of Technoethics, 2(1), 35–42. https://doi.org/10.4018/jte.2011010103.
Yu, P., & Fuller, G. (1986). A critique of Dennett. Synthese, 66(3), 453–476.
Acknowledgements
I would like to thank my supervisor and mentor Tanya de Villiers-Botha for her insightful comments and guidance. I am also indebted to Deryck Hougaard and Lize Alberts who read earlier drafts of this paper and provided very useful feedback.
Keywords
- Machine moral patiency
- Sentience
- Anthropocentrism
- Intentional stance
- Organic view of ethical status