Minds and Machines, Volume 14, Issue 3, pp 349–379

On the Morality of Artificial Agents

  • Luciano Floridi
  • J.W. Sanders


Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on ‘mind-less morality’ we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the ‘Method of Abstraction’ for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The ‘Method of Abstraction’ is explained in terms of an ‘interface’ or set of features or observables at a given ‘LoA’. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the ‘transition rules’ by which state is changed) at a given LoA. Morality may be thought of as a ‘threshold’ defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it. 
That view is particularly informative when the agent is a software or digital system and the observables are numerical. Finally, we review the consequences of our approach for Computer Ethics. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary ‘cost’ of this facility is the extension of the class of agents and moral agents to embrace AAs.
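The abstract's three criteria for agenthood at a given LoA — interactivity, autonomy and adaptability — together with a moral 'threshold' on numerical observables, can be sketched as a toy transition system. This is purely illustrative and not from the paper; the class, its single observable, and all method names are hypothetical:

```python
import random

class ArtificialAgent:
    """Toy transition system illustrating the abstract's agenthood
    criteria at one LoA. All names here are hypothetical."""

    def __init__(self, state=0.0, threshold=10.0):
        self.state = state          # a single numerical observable
        self.threshold = threshold  # the moral threshold on that observable
        self.rule = lambda s, stim: s + stim  # current transition rule

    def interact(self, stimulus):
        """Interactivity: respond to a stimulus by a change of state."""
        self.state = self.rule(self.state, stimulus)

    def act_autonomously(self):
        """Autonomy: change state without any external stimulus."""
        self.state = self.rule(self.state, random.uniform(-1.0, 1.0))

    def adapt(self, damping):
        """Adaptability: change the transition rule itself."""
        self.rule = lambda s, stim: s + damping * stim

    def is_morally_good(self):
        """An agent is morally good if its observable respects the
        threshold; an action violating it would make the agent evil."""
        return abs(self.state) <= self.threshold


agent = ArtificialAgent()
agent.interact(3.0)   # state driven by an external stimulus -> 3.0
agent.adapt(0.5)      # agent rewrites its own transition rule
agent.interact(4.0)   # new rule applies: 3.0 + 0.5 * 4.0 -> 5.0
print(agent.is_morally_good())  # True: |5.0| <= 10.0
```

The point of the sketch is only that morality here is evaluated on observables at the chosen LoA, with no appeal to mental states, free will or responsibility.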

Keywords: artificial agents, computer ethics, levels of abstraction, moral responsibility





Copyright information

© Kluwer Academic Publishers 2004

Authors and Affiliations

  • Luciano Floridi
  • J.W. Sanders

  1. Information Ethics Group, University of Oxford, UK
