On the Morality of Artificial Agents

Abstract

Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations, for they can be conceived of as moral patients (entities that can be acted upon for good or evil) and also as moral agents (entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of the morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of a moral agent that does not necessarily exhibit free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’, or set of features or observables, at a given LoA. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed), all at a given LoA. Morality may then be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold, and morally evil if some action violates it. That view is particularly informative when the agent constitutes a software or digital system and the observables are numerical. Finally, we review the consequences of our approach for Computer Ethics. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.
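
Read operationally, these criteria lend themselves to a toy formalisation. The sketch below is an editorial illustration, not code from the paper: it assumes Python, and every name in it (ArtificialAgent, step, adapt, morally_good, the ‘harm’ observable, drift_rule) is hypothetical, chosen only to make the three agenthood criteria and the threshold reading of morality concrete at one fixed LoA.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

# A level of abstraction (LoA) is modelled here as the set of observables
# one chooses to track; the agent's state *at that LoA* is just an
# assignment of numerical values to those observables.
Observables = Dict[str, float]
Rule = Callable[[Observables, Optional[float]], Observables]

@dataclass
class ArtificialAgent:          # hypothetical name, not from the paper
    state: Observables
    rule: Rule                  # the current 'transition rule'

    def step(self, stimulus: Optional[float] = None) -> None:
        # Interactivity: the state changes in response to a stimulus.
        # Autonomy: the state can also change when stimulus is None.
        self.state = self.rule(self.state, stimulus)

    def adapt(self, new_rule: Rule) -> None:
        # Adaptability: the agent can change the very transition rule
        # by which its state is changed.
        self.rule = new_rule

def morally_good(trace: List[Observables],
                 observable: str, threshold: float) -> bool:
    # Morality as a threshold on a numerical observable: the agent is
    # good if every observed state respects the threshold, and evil if
    # some state violates it.
    return all(state[observable] <= threshold for state in trace)

# Hypothetical usage: a controller whose 'harm' observable must stay
# below a safety threshold of 1.0.
def drift_rule(state: Observables, stimulus: Optional[float]) -> Observables:
    delta = stimulus if stimulus is not None else 0.1   # autonomous change
    return {**state, "harm": state["harm"] + delta}

agent = ArtificialAgent(state={"harm": 0.0}, rule=drift_rule)
trace = [dict(agent.state)]
for stim in (0.2, None, 0.3):   # two stimuli and one 'no stimulus' step
    agent.step(stim)
    trace.append(dict(agent.state))

print(morally_good(trace, "harm", threshold=1.0))  # True: harm stays below 1.0
```

Nothing in the sketch requires free will or mental states: the verdict is computed purely from the observables fixed by the chosen LoA, which is the point of the ‘mind-less morality’ reading.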

Cite this article

Floridi, L., Sanders, J.W. On the Morality of Artificial Agents. Minds and Machines 14, 349–379 (2004). https://doi.org/10.1023/B:MIND.0000035461.63578.9d
