
Virtuous vs. utilitarian artificial moral agents


Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on virtue theory. While the virtuous artificial moral agent has various strengths, this paper argues that a rule-based utilitarian approach (in contrast to a strict act utilitarian approach) is superior, because it can capture the most important features of the virtue-theoretic approach while realizing additional significant benefits. Specifically, a two-level utilitarian artificial moral agent incorporating both established moral rules and a utility calculator is especially well suited for machine ethics.
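The two-level architecture the abstract proposes can be sketched in code. The following Python sketch is purely illustrative and not from the paper: the rule names, actions, and utility values are assumptions, standing in for Hare-style intuitive-level moral rules with a critical-level utility calculation as fallback when the rules conflict or exclude every option.

```python
# Illustrative sketch of a two-level utilitarian decision procedure.
# The "intuitive" level applies established moral rules; the "critical"
# level falls back on direct expected-utility maximization.
# All rules, actions, and numbers below are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Action:
    name: str
    expected_utility: float                      # critical-level estimate
    violates: set = field(default_factory=set)   # rules this action breaks


# Intuitive level: a small set of established moral rules (assumed names).
MORAL_RULES = {"do_not_harm", "keep_promises", "obey_traffic_law"}


def permitted(action: Action) -> bool:
    """Intuitive level: an action is permitted if it breaks no rule."""
    return not (action.violates & MORAL_RULES)


def choose(actions: list) -> Action:
    """Prefer rule-permitted actions; when the rules rule out every option
    (or conflict), drop to the critical level and maximize expected utility."""
    allowed = [a for a in actions if permitted(a)]
    pool = allowed if allowed else actions
    return max(pool, key=lambda a: a.expected_utility)
```

For example, a rule-abiding action is chosen over a higher-utility rule-violating one, but when every available action violates some rule, the procedure defaults to the utility calculator.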


Figures 1–4 (images not reproduced here)


Notes

  1. There are approaches to ethics that eschew the big three theoretical traditions, but I would argue that such approaches cannot entirely avoid the concerns and lessons that the study of these three traditions has revealed.

  2. There are ways of developing AMAs that may not explicitly reference any standard moral theories but instead rely upon a set of moral standards derived from common human interests or scientific surveys of preferences.

  3. H&M (2016, 2017) prefer the term ‘AAMA’: Artificial Autonomous Moral Agent.

  4. H&M (2017: 136–138) offer a fourth analogy, that the moral cognition of an AMA can be modeled on machine learning; but this is logically implied by the others, serving as a sort of conclusion to their set of analogies.

  5. Their intention is that the virtuous AMA be geared towards specific moral domains, such as patient care or autonomous vehicles (H&M 2017: 221), but with enough computational power it could presumably be extended to handle multiple moral domains simultaneously.

  6. One key difference between the two is that Bentham holds there is only one kind of pleasure (any differences among pleasures are merely conceptual), whereas Mill draws a real distinction between intellectual and bodily pleasures, holding that the intellectual pleasures are superior.

  7. Here, neural nets and deep learning could be incorporated to identify and establish new rules or patterns of behavior (though this is a question for software engineers to decide).

  8. Hooker (2000: 89) argues that conflicts between rules should not be resolved by applying act utilitarianism, partly because people would lose confidence in the system of rules; this is not a worry for machine ethics, however, since confidence does not enter the equation.

  9. Thanks to an anonymous reviewer for raising this objection.


References

  1. Anderson SL, Anderson M (2011) A prima facie duty approach to machine ethics and its application to elder care. American Association for Artificial Intelligence, Menlo Park, pp 2–7

  2. Anderson M, Anderson SL, Armen C (2004) Towards machine ethics: implementing two action-based ethical theories. American Association for Artificial Intelligence, Menlo Park

  3. Anderson M, Anderson SL, Armen C (2006) MedEthEx: a prototype medical ethics advisor. American Association for Artificial Intelligence, Menlo Park, pp 1759–1765

  4. Aristotle (350 BCE) Nicomachean ethics. In: Ross WD (trans) The internet classics archive. Accessed 7 Oct 2018

  5. Bentham J (1789) An introduction to the principles of morals and legislation (in the version prepared by Jonathan Bennett)

  6. Bringsjord S, Konstantine A, Bello P (2006) Toward a general logicist methodology for engineering ethically correct robots. IEEE Intell Syst 21(4):38–44

  7. Doyle J (1983) What is rational psychology? Toward a modern mental philosophy. AI Mag 4(3):50–53

  8. Floridi L (2013) The ethics of information. Oxford University Press, New York

  9. Foot P (1967) The problem of abortion and the doctrine of double effect. Oxf Rev 5:5–15

  10. Grau C (2006) There is no “I” in “robot”: robots and utilitarianism. IEEE Intell Syst 21(4):52–55

  11. Hare RM (1983) Moral thinking: its levels, method, and point. Oxford University Press, New York

  12. Hooker B (2000) Ideal code, real world. Oxford University Press, New York

  13. Howard D, Muntean I (2016) A minimalist model of the artificial autonomous moral agent (AAMA). Association for the Advancement of Artificial Intelligence, Menlo Park

  14. Howard D, Muntean I (2017) Artificial moral cognition: moral functionalism and autonomous moral agency. In: Powers TM (ed) Philosophy and computing. Philosophical studies series, vol 128. Springer, New York, pp 121–160

  15. Jackson F (1998) From metaphysics to ethics: a defence of conceptual analysis. Oxford University Press, New York

  16. Kahneman D (2011) Thinking, fast and slow. Farrar, Straus, and Giroux, New York

  17. Kant I (1785) Groundwork for the metaphysics of morals (in the version prepared by Jonathan Bennett)

  18. Leben D (2017) A Rawlsian algorithm for autonomous vehicles. Ethics Inf Technol 19:107–115

  19. Mill JS (1861) Utilitarianism (in the version prepared by Jonathan Bennett)

  20. Nathanson S (2018) Act and rule utilitarianism. In: Internet encyclopedia of philosophy. Accessed 7 Oct 2018

  21. Powers TM (2006) Prospects for a Kantian machine. IEEE Intell Syst 21(4):46–51

  22. Rawls J (1971) A theory of justice. Belknap (Harvard University Press), Cambridge

  23. Ross WD (1930) The right and the good. Oxford University Press, New York

  24. Singer P (2011) The expanding circle: ethics, evolution, and moral progress. Princeton University Press, Princeton (revised edition; original published in 1981)

  25. Varner G (2012) Personhood, ethics, and animal cognition: situating animals in Hare’s two level utilitarianism. Oxford University Press, New York

  26. Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press, New York



Acknowledgements

Thanks to participants at the Brain-Based and Artificial Intelligence workshop held at the Center for the Study of Ethics in the Professions, Illinois Institute of Technology (May 10–11, 2018), for helpful questions and discussion. Thanks also to two anonymous reviewers for helpful comments. Images of the stop and yield signs are courtesy of

Author information

Correspondence to William A. Bauer.



About this article


Cite this article

Bauer, W.A. Virtuous vs. utilitarian artificial moral agents. AI & Soc 35, 263–271 (2020).



Keywords

  • Machine ethics
  • Artificial moral agent
  • Machine learning
  • Virtue theory
  • Two-level utilitarianism