From Moral Agents to Moral Factors: The Structural Ethics Approach

Philip Brey
Part of the Philosophy of Engineering and Technology book series (POET, volume 17)


It has become a popular position in the philosophy of technology to claim that some or all technological artifacts can qualify as moral agents. This position has been developed to account for the moral role of technological artifacts in society and to help clarify the moral responsibility of engineers in design. In this paper, I evaluate various positions in favor of the view that technological artifacts are or can be moral agents. I find that these positions, while expressing important insights about the moral role of technological artifacts, are ultimately lacking because they obscure important differences between human moral agents and technological artifacts. I then develop an alternative view, which does not ascribe moral agency to artifacts but does attribute important moral roles to them. I call this approach structural ethics. Structural ethics is complementary to individual ethics, the ethical study of individual human agents and their behaviors. Structural ethics focuses on ethical aspects of social and material networks and arrangements, and of their components, which include humans, animals, artifacts, natural objects, and complex structures composed of such entities, such as organizations. In structural ethics, components of networks that have moral implications are called moral factors. Artifact ethics is the study of individual artifacts within structural ethics: it studies how technological artifacts may function as moral factors in various kinds of social and material arrangements, as well as across arrangements. I argue that structural ethics and artifact ethics provide a sound alternative to approaches that attribute moral agency to artifacts. I end by arguing that some advanced future technological systems, such as robots, may have capacities for moral deliberation that make them resemble human moral agents, but that even such systems will likely lack important features of human agents that they would need to qualify as full-blown moral agents.


Keywords: Moral Responsibility, Human Agent, Moral Agent, Intelligent Agent, Intentional State



Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

Department of Philosophy, University of Twente, Enschede, The Netherlands
