Ethics and consciousness in artificial agents

Original Article · AI & SOCIETY

Abstract

In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents based on current computational technologies could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, and this lack precludes them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, are, it is proposed, unlikely to possess sentience and hence will be unable to exercise the kind of empathic rationality that is a prerequisite for being a moral agent. The Organic view also argues that sentience and teleology require biologically based forms of self-organization and autonomous self-maintenance. The Organic view may not be correct, but at least it needs to be taken seriously in the future development of the field of Machine Ethics.

Notes

  1. Future decision-makers may wrongly view certain kinds of artificial beings as having ethical status when they do not merit this (for instance, as being sentient creatures requiring certain kinds of benevolent treatment when they in fact lack sentient states). If there are vast numbers of such beings, this could involve a massive diversion of resources away from humans who could have benefited greatly from them. The converse may also be the case: conscious, feeling artificial beings may be mistakenly dismissed as non-sentient tools, so that their genuine needs (which, arguably, in this scenario ought ethically to be given consideration) are disregarded. In either scenario a great moral wrong could be committed, in the one case against humans, and in the other against humanoids; and the more so the greater the numbers of such artificial beings are imagined to be.

  2. See Torrance (2000) for a defence of the position that a strong distinction should be drawn between cognitive and phenomenological properties of mind. It is argued there that mentality may not be a unitary field: some types of mental property may be computational, while others may require an inherently biological basis. Thus, AI agents could be the subjects of genuine psychological attributions of a cognitive kind, while psychological attributions requiring subjective or conscious states may fail to apply to such agents merely in virtue of their computational features.

  3. Many within the AI community would strongly support the view that computational agents could in principle be fully conscious. See the papers assembled in Holland (2003), for example. For some doubts on that view see Torrance (1986, 2004, 2007).

  4. The use of the term ‘owner’ in the last example is itself a telling sign of the difference in ethical status between humans and humanoids. The UN Declaration of Human Rights is designed, among other things, to outlaw ownership by one human of another. A prohibition of ownership of humanoid robots by humans is unlikely to be agreed upon for a considerable time, if ever.

  5. The notion of sentience should be distinguished from that of self-consciousness: many beings that possess the former may not possess the latter. Arguably, many mammals possess sentience, or phenomenal consciousness: they are capable of feeling pain, fear, sensuous pleasure and so on. Nevertheless, it is usually taken that such mammals do not standardly possess the ability to articulate such sentient states, or to be aware of them in a higher-order way, and so they lack self-consciousness.

  6. There are even more extreme versions of the Organic view, which assert one or both of the following: (A) only organic beings could be subjects of any psychological states; (B) only naturally occurring organic beings could be subjects of sentient (or other) psychological states. I have resisted taking either of these more extreme positions. Reasons for doubting (A), and for allowing that some types of cognitive mental state could be attributed in a full psychological sense to computational systems, will be found in Torrance (2000). As for (B), it rules out the possibility of creating complete artificial replications of biological organisms by any means whatsoever, including massive molecular-level engineering.

  7. Many would argue that this commits the ‘fact-value’ fallacy, by attempting to derive moral values (concerning how someone’s experiential states are to be treated) from morally neutral factual statements (about the existence of those experiences themselves). To this I would reply that the assertion of a fact-value dichotomy has been widely challenged; and if there is any area where it seems most susceptible to challenge, it is this one: the relation between statements about experience and moral commitments towards such experiences. In any case, the alleged fact-value dichotomy concerns a supposed logical or deductive gap between the two, and the kind of commitment in question here might be of some other kind, for instance a fundamental moral commitment or a non-deductive commitment of rationality. Also, the view of the structure of thought and language presupposed by the fact-value distinction (a neat division into ‘factual’ statements and ‘evaluative’ prescriptions), while perhaps reasonable as a theoretical construct, may be quite inadequate for characterizing the concrete reality of moral and psychological thinking.

  8. To describe you as a moral source with respect to me is to describe you as having a moral commitment to assist me. If you are a moral target with respect to me, then I have a moral commitment to assist you.

  9. It should be noted that there is another sense of ‘moral target’, in which someone could be a target of moral praise, recrimination, and so on. In that sense, an agent could be a target of negative appraisal for doing wrong things or failing to fulfil its responsibilities, and a target of positive moral appraisal for doing good or right things. The sense of ‘target’ used in the discussion above is different: one is a moral target if one is potentially an object of moral concern because of how one is, not because of what one has done or failed to do.

  10. It should be noted that the case under consideration is not one where we have to prioritize between rescuing humans and retrieving valuable or irreplaceable equipment. There, difficult choices may have to be made; for instance, we may need to divert time from attending to human need in order to rescue an important piece of medical equipment that can save many lives if kept intact. The case under consideration is, rather, one in which the non-sentient creatures are given some priority not because of their utility to sentient creatures, but because, despite being recognized as non-sentient, they have calls on our moral concern. It is not clear how many people would seriously take that to be a nettle worth grasping.

  11. For an extended discussion of the short story by Isaac Asimov on which this film is based, see Anderson, this issue.

  12. In the movie it is left rather indeterminate whether Andrew is to be regarded as phenomenally conscious or merely functionally so, at least at the stage in the story where he acquires property-owning status. The force of the example is not unduly reduced by imagining a definitely non-sentient robot in Andrew’s situation.

  13. It might be said that all rationality involves at least some affective components, and that this strongly limits the kinds of rationality, or indeed intelligence, that could be implemented using current methods in AI (for key discussion see Picard 1997). I am not taking a view on this; I am concerned to concede as much as possible to the viewpoint being opposed, including that there can be purely cognitive forms of rationality.

  14. See additionally the recent synthesizing discussions by Weber and Varela (2002); also Thompson (2004) and Di Paolo (2005).

  15. This discussion prompts the question of where one draws the line between those creatures which have a sufficient degree of consciousness to be taken as having moral significance and those which do not. To this there is, it seems, no clear answer if, as the above view seems to imply, consciousness emerges by the tiniest increments along the spectrum of evolutionary development. Just as, when standing on a very flat shore—such as the beaches of Normandy, for example—there is no clear division between land and sea, so there may be a very broad tideline between morally significant and non-significant creatures. Whether this is seen as a strength or a weakness of the position is left to the reader to decide.

References

  • Anderson M, Anderson SL, Armen C (eds) (2005) Machine ethics: papers from the AAAI Fall Symposium. Technical Report FS-05-06. AAAI Press, Menlo Park, CA

  • Calverley D (2005a) Towards a method for determining the legal status of a conscious machine. In: Chrisley R, Clowes R, Torrance S (eds) Next generation approaches to machine consciousness: imagination, development, intersubjectivity, and embodiment (proceedings of an AISB05 Symposium). University of Hertfordshire, Hertfordshire, UK, pp 75–84

  • Calverley D (2005b) Android science and the animal rights movement: are there analogies? In: Toward social mechanisms of android science (proceedings of a CogSci-2005 Workshop), Cognitive Science Society, Stresa, Italy, pp 127–136

  • Di Paolo E (2003) Organismically-inspired robotics: homeostatic adaptation and natural teleology beyond the closed sensorimotor loop. In: Murase K, Asakura T (eds) Dynamical systems approach to embodiment and sociality. Advanced Knowledge International, Adelaide, pp 19–42

  • Di Paolo E (2005) Autopoiesis, adaptivity, teleology, agency. Phenomenol Cogn Sci 4(4):429–452

  • Floridi L, Sanders J (2004) On the morality of artificial agents. Mind Mach 14(3):349–379

  • Franklin S (2003) IDA: a conscious artefact? J Conscious Stud 10(4–5):47–66

  • Holland O (ed) (2003) Machine consciousness. Imprint Academic, Exeter (also published as special issue of J Conscious Stud 10(4–5))

  • Jonas H (1966) The phenomenon of life: towards a philosophical biology. Northwestern University Press, Evanston, IL

  • Letelier J, Marin G, Mpodozis J (2003) Autopoietic and (M, R) systems. J Theor Biol 222(2):261–272

  • Maturana H, Varela F (1980) Autopoiesis and cognition: the realization of the living. D. Reidel, Dordrecht, Holland

  • Picard R (1997) Affective computing. MIT, Cambridge, MA

  • Strawson PF (1974) Freedom and resentment. In: Strawson PF (ed) Freedom and resentment and other essays. Methuen, London

  • Thompson E (2004) Life and mind: from autopoiesis to neurophenomenology, a tribute to Francisco Varela. Phenomenol Cogn Sci 3(4):381–398

  • Thompson E (2005) Sensorimotor subjectivity and the enactive approach to experience. Phenomenol Cogn Sci 4(4):407–427

  • Thompson E (2007) Mind in life: biology, phenomenology, and the sciences of mind. Harvard University Press, Cambridge, MA

  • Torrance SB (1986) Ethics, mind and artifice. In: Gill KS (ed) AI for society. John Wiley, Chichester, pp 55–72

  • Torrance SB (2000) Producing mind. J Exp Theor Artif Intell xxxx

  • Torrance SB (2004) Us and them: living with self-aware systems. In: Smit I, Wallach W, Lasker G (eds) Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence, vol III. IIAS, Windsor, ON, pp 7–14

  • Torrance SB (2007) Two conceptions of machine phenomenality. J Conscious Stud, forthcoming

  • United Nations (1948) U.N. Universal declaration of human rights. http://www.unhchr.ch/udhr/index.htm

  • Varela F, Thompson E, Rosch E (1991) The embodied mind: cognitive science and human experience. MIT, Cambridge, MA

  • Weber A, Varela F (2002) Life after Kant: natural purposes and the autopoietic foundations of biological individuality. Phenomenol Cogn Sci 1(2):97–125

Acknowledgments

This paper is the result of long-standing dialogues that the author has had with various members of the Machine Consciousness and Machine Ethics communities. It is a much revised and expanded version of ‘A Robust View of Machine Ethics’, delivered at the AAAI Fall 2005 Symposium on Machine Ethics, Arlington, VA (Anderson et al. 2005). I am grateful, for helpful discussions on aspects of the above paper, to Igor Aleksander, Colin Allen, Michael Anderson, Susan Anderson, Selmer Bringsjord, David Calverley, Ron Chrisley, Robert Clowes, Ruth Crocket, Hanne De Jaegher, Ezequiel Di Paolo, Kathleen Richardson, Aaron Sloman, Iva Smit, Wendell Wallach and Blay Whitby; and also members of the ETHICBOTS group at Middlesex and the PAICS group at Sussex. However, these people may not easily identify, or identify with, the ways their inputs have been taken up and used.

Author information

Correspondence to Steve Torrance.

About this article

Cite this article

Torrance, S. Ethics and consciousness in artificial agents. AI & Soc 22, 495–521 (2008). https://doi.org/10.1007/s00146-007-0091-8
