
Un-making artificial moral agents

Ethics and Information Technology


Floridi and Sanders' seminal work, "On the Morality of Artificial Agents," has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as "artificial agents." Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the 'true' status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.


References


  • Robert Andrews. A Decade After Kasparov's Defeat, Deep Blue Coder Relives Victory. Wired. May 11, 2007. Available at

  • Wiebe Bijker and Trevor Pinch. The Social Construction of Facts and Artifacts. In Wiebe Bijker, Thomas Hughes and Trevor Pinch, editors, The Social Construction of Technological Systems, pp. 17–51. MIT Press, Cambridge, Mass., 1987.

  • Luciano Floridi and Jeff W. Sanders. Artificial Evil and the Foundation of Computer Ethics. Ethics and Information Technology, 3(1): 55–66, 2001.

  • Luciano Floridi and Jeff W. Sanders. On the Morality of Artificial Agents. Minds and Machines, 14(3): 349–379, 2004.

  • Frances S. Grodzinsky, Keith W. Miller and Marty J. Wolf. The Ethics of Designing Artificial Agents. CEPE. San Diego, CA, July 12–14, 2007. Abstract Available at

  • Deborah G. Johnson. Computer Systems: Moral Entities but not Moral Agents. Ethics and Information Technology, 8(4): 195–204, 2006.

  • Deborah G. Johnson. Nanoethics: An Essay on Ethics and Technology ‘in the making’. Nanoethics, 1(1): 21–30, 2007.

  • Deborah G. Johnson and Thomas M. Powers. Computers as Surrogate Agents. In J. Van Den Hoven and J. Weckert, editors, Information Technology and Moral Philosophy. Cambridge University Press, Cambridge, 2008.

  • Bill Joy. Why the Future Doesn't Need Us. Wired, 8(4): 238–262, 2000.

  • Neil Pollock. When Is a Work-Around? Conflict and Negotiation in Computer Systems Development. Science, Technology & Human Values, 30(4): 496–514, 2005.

  • John Sullins. Ethics and Artificial Life: From Modeling to Moral Agents. Ethics and Information Technology, 7(3): 139–148, 2005.


Author information

Corresponding author

Correspondence to Deborah G. Johnson.

Cite this article

Johnson, D.G., Miller, K.W. Un-making artificial moral agents. Ethics Inf Technol 10, 123–133 (2008).
