
Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles


Abstract

Autonomous vehicles (AVs)—and the accidents they are involved in—attest to the urgent need to consider the ethics of artificial intelligence (AI). The question dominating the discussion so far has been whether we want AVs to behave in a ‘selfish’ or utilitarian manner. Rather than modeling self-driving cars on a single moral system such as utilitarianism, programming for AI could instead reflect recent work in neuroethics. The agent–deed–consequence (ADC) model (Dubljević and Racine in AJOB Neurosci 5(4):3–20, 2014a; Behav Brain Sci 37(5):487–488, 2014b) provides a promising descriptive and normative account while also lending itself well to implementation in AI. The ADC model explains moral judgments by breaking them down into positive or negative intuitive evaluations of the agent, deed, and consequence in any given situation. These intuitive evaluations combine to produce a positive or negative judgment of moral acceptability. For example, the overall judgment of moral acceptability in a situation in which someone committed a deed that is judged as negative (e.g., breaking a law) would be mitigated if the agent had good intentions and the action had a good consequence. This explains the considerable flexibility and stability of human moral judgment that has yet to be replicated in AI. This paper examines the advantages and disadvantages of implementing the ADC model and how the model could inform future work on the ethics of AI in general.
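The combination step the abstract describes can be sketched computationally: each of the three components receives a signed intuitive evaluation, and these combine into an overall judgment of moral acceptability. This is a minimal illustrative sketch; the weighting scheme, weights, and threshold are assumptions for exposition, not parameters proposed in the paper.

```python
# Illustrative sketch of the ADC model's combination step.
# Each component (agent's intent, deed, consequence) gets a signed
# intuitive evaluation in [-1, 1]; a weighted sum yields the overall
# judgment. Weights and threshold are hypothetical.

def adc_judgment(agent: float, deed: float, consequence: float,
                 weights=(1.0, 1.0, 1.0), threshold=0.0) -> bool:
    """Return True if the situation is judged morally acceptable."""
    score = (weights[0] * agent
             + weights[1] * deed
             + weights[2] * consequence)
    return score > threshold

# The abstract's example: a negative deed (breaking a law) is mitigated
# by good intentions and a good consequence.
print(adc_judgment(agent=0.8, deed=-0.5, consequence=0.7))   # True
# The same deed with bad intentions and a bad outcome is condemned.
print(adc_judgment(agent=-0.8, deed=-0.5, consequence=-0.7)) # False
```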



Notes

  1. For instance, machine learning makes learned ethical rules opaque, thereby making transparency impossible. Additionally, disambiguating ethical from unethical discriminations or generalizations is no simple task, as examples of racist chat-bots attest. See Abel et al. (2016). See also Misselhorn (2018).

  2. This problem is not limited to AVs. A recurring theme in machine ethics is that humans will break rules, which makes implementing ethics in AI “very challenging” (Bringsjord et al. 2006, p. 12). See also Abel et al. (2016).

  3. For purposes of this argument, the label ‘terrorist’ is used for any malicious actor that deliberately targets civilians, regardless of ideology. So, ‘ISIS’ fighters, white supremacists, and even individuals targeting others to protest their ‘involuntary celibacy’ all fall under the same term. Even though there is no space to argue for that here, utilitarianism fails to incorporate any kind of malicious intent, and would likely be exploited even more frequently in less tragic ways, say, to commit acts of vandalism. I am grateful to Kevin Richardson for constructive comments that prompted me to make this clear.

  4. Hybrid approaches avoid the difficulties of both top-down approaches (programming rigid rules) and bottom-up approaches (relying solely on machine learning) while combining their strengths. As Wallach rightly notes: “Engineers typically draw on both a top-down analysis and a bottom-up assembly of components in building complex automata. If the system fails to perform as designed, the control architecture is adjusted, software parameters are refined, and new components are added. In building a system from the bottom-up the learning can be that of the engineer or by the system itself, facilitated by built-in self-organizing mechanism, or as it explores its environment and the accommodation of new information.” (Wallach 2008, p. 468). Additionally, as Bringsjord and colleagues note, implementation must “begin by selecting an ethics code C intended to regulate the behavior of R [robots]. […] C would normally be expressed by philosophers essentially in English [or another natural language] […before] formalization of C in some computational logic L, whose well-formed formulas and proof theory are specified” (2006, p. 3).
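The hybrid architecture described in the note above can be sketched as a top-down rule filter combined with a bottom-up learned scorer. The rules, option attributes, and scoring function here are all hypothetical placeholders, not an implementation proposed in the paper.

```python
# Sketch of a hybrid architecture: top-down hard rules prune
# impermissible options; a bottom-up (here, stand-in) learned scorer
# ranks what remains. All names and values are illustrative.

HARD_RULES = [
    lambda option: not option.get("targets_humans", False),  # never target people
]

def learned_score(option):
    # Stand-in for a model trained bottom-up; a fixed heuristic here.
    return -option.get("expected_harm", 0.0)

def choose(options):
    permitted = [o for o in options if all(rule(o) for rule in HARD_RULES)]
    if not permitted:
        return None  # no permissible option; defer to a fallback policy
    return max(permitted, key=learned_score)

options = [
    {"name": "swerve", "expected_harm": 0.2},
    {"name": "brake", "expected_harm": 0.1},
    {"name": "accelerate", "expected_harm": 0.9, "targets_humans": True},
]
print(choose(options)["name"])  # brake
```

The top-down layer guarantees hard constraints are never violated regardless of what the learned component prefers, which is the division of labor the hybrid approach aims for.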

  5. Unlike trolley problems, which are simple binary choices, vignettes and situations designed for the ADC approach have eight distinct versions and can capture weights that people assign to these factors in dramatic or mundane situations (see Dubljević et al. 2018). At this point, one can be neutral on the computer engineering question of implementation via classical symbol system or connectionist/neural network system or even a compatibilist connectionist-simulated-on-classical system. The main concern is only that the AVs should be able to encode the ADC model of moral judgment. The problem is still at the level of human agreement on a specific code to be implemented. As Bringsjord and colleagues rightly note “if humans cannot formulate an ethical code […] [a] logic-based approach is impotent” (2006, p. 13). I am grateful to Ron Endicott for constructive comments that prompted me to make this explicit.
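The eight distinct vignette versions mentioned in the note above follow from crossing positive and negative values of the three ADC factors, a 2 × 2 × 2 design. A minimal sketch of the enumeration (the "+/-" labels are illustrative shorthand):

```python
# The 2 x 2 x 2 design behind ADC vignettes: varying agent, deed, and
# consequence between positive and negative yields eight versions.
from itertools import product

versions = list(product(["A+", "A-"], ["D+", "D-"], ["C+", "C-"]))
print(len(versions))   # 8
print(versions[0])     # ('A+', 'D+', 'C+')
```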

  6. It is perhaps possible that a more complex version of utilitarian-inspired decision-making algorithms would fare better in this regard, but to my knowledge, no current work on utilitarian AVs treats malicious intent, or the difference between low- and high-stakes situations, as a serious issue for implementation. I am grateful to Bill Bauer for constructive comments that prompted me to make this clear.

  7. I’m grateful to Michael Pendlebury and other audience members at the “Work in progress in philosophy” session at NC State University, on Oct 26th, 2018, for helpful and constructive comments that prompted this distinction.

  8. Indeed, ‘hit-and-run’ incidents are the most likely moral situation that AVs will encounter, but the difference between ‘high-’ and ‘low-stakes’ moral situations is a crucial addition. More on that below.

  9. The assumption here is that implementation of the transponder system would be mandatory at vehicle registration for both AVs and regular vehicles.

  10. This might need to be qualified with an override preventing those human passengers from taking control of the AV, so as to thwart any malicious or nefarious plans of exploiting the system.


References

  • Abel, D., MacGlashan, J., & Littman, M. L. (2016). Reinforcement learning as a framework for ethical decision making. In B. Bonet et al. (Eds.), AAAI workshop: AI, ethics, and society. AAAI Press.

  • Anderson, S. (2008). Asimov’s ‘three laws of robotics’ and machine metaethics. AI & SOCIETY, 22(4), 477–493.

  • Anderson, M., & Anderson, S. (2007). The status of machine ethics: A report from the AAAI symposium. Minds and Machines, 17(1), 1–10.

  • Anderson, M., Anderson, S., & Armen, C. (2006). MedEthEx: A prototype medical ethics advisor. In Proceedings of the national conference on artificial intelligence (p. 1759). MIT Press.

  • Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The moral machine experiment. Nature, 563, 59–64.

  • Beauchamp, T. L., & Childress, J. F. (2013). Principles of biomedical ethics (7th ed.). New York: Oxford University Press.

  • Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573–1576.

  • Bonnemains, V., Saurel, C., & Tessier, C. (2018). Embedded ethics: Some technical and ethical challenges. Ethics and Information Technology, 20, 41–58.

  • Bringsjord, S., Arkoudas, K., & Bello, P. (2006). Toward a general logicist methodology for engineering ethically correct robots. IEEE Intelligent Systems, 21(4), 38–44.

  • Christensen, J. F., & Gomila, A. (2012). Moral dilemmas in cognitive neuroscience of moral decision-making: A principled review. Neuroscience and Biobehavioral Reviews, 36, 1249–1264.

  • Deng, B. (2015). Machine ethics: The robot’s dilemma. Nature, 523, 24–26.

  • Dennis, L., Fisher, M., Slavkovik, M., & Webster, M. (2016). Formal verification of ethical choices in autonomous systems. Robotics and Autonomous Systems, 77, 1–14.

  • Dewey, J. (1929). The quest for certainty: A study of the relation of knowledge and action. New York: Milton, Balch & Company.

  • Dubljević, V., & Racine, E. (2014a). The ADC of moral judgment: Opening the black box of moral intuitions with heuristics about agents, deeds and consequences. AJOB Neuroscience, 5(4), 3–20.

  • Dubljević, V., & Racine, E. (2014b). A single cognitive heuristic process meets the complexity of domain-specific moral heuristics. Behavioral and Brain Sciences, 37(5), 487–488.

  • Dubljević, V., & Racine, E. (2017). Moral enhancement meets normative and empirical reality: assessing the practical feasibility of moral enhancement neurotechnology. Bioethics, 31(5), 338–348.

  • Dubljević, V., Sattler, S., & Racine, E. (2018). Deciphering moral intuition: How agents, deeds and consequences influence moral judgment. PLoS ONE, 13(10), e0204631.

  • Fournier, T. (2016). Will my next car be libertarian or utilitarian? Who will decide? IEEE Technology & Society, 35(2), 40–45.

  • Grau, C. (2006). There is no ‘I’ in ‘Robot’: Robots and utilitarianism. IEEE Intelligent Systems, 21(4), 52–55.

  • Hopkins, N., Chrisafis, A., & Fischer, S. (2016). Bastille Day attack: ‘Hysterical crowds were running from death’. The Guardian. Accessed November 15, 2018.

  • Hursthouse, R. (1999). On virtue ethics. Oxford: Oxford University Press.

  • Leben, D. (2017). A Rawlsian algorithm for autonomous vehicles. Ethics and Information Technology, 19, 107–115.

  • Lord Bowden of Chesterfield. (1985). The story of IFF (identification friend or foe). IEE Proceedings, 132(6 pt. A), 435–437.

  • Luetge, C. (2017). The German ethics code for automated and connected driving. Philosophy & Technology, 30, 547–558.

  • Misselhorn, C. (2018). Artificial morality: Concepts, issues and challenges. Society, 55, 161–169.

  • Powers, T. M. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4), 46–51.

  • Shariff, A., Bonnefon, J.-F., & Rahwan, I. (2017). Psychological roadblocks to the adoption of self-driving vehicles. Nature Human Behaviour, 1, 694–696.

  • Singhvi, A., & Russel, K. (2016). Inside the self-driving Tesla fatal accident. The New York Times. Accessed November, 2018.

  • Spielthenner, G. (2017). The is-ought problem in practical ethics. HEC Forum, 29(4), 277–292.

  • Tonkens, R. (2012). Out of character: On creation of virtuous machines. Ethics and Information Technology, 14, 137–149.

  • Waldrop, M. M. (2015). Autonomous vehicles: No drivers required. Nature, 518, 20–23.

  • Wallach, W. (2008). Implementing moral decision making faculties in computers and robots. AI & SOCIETY, 22, 463–475.

  • Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

Corresponding author

Correspondence to Veljko Dubljević.


Cite this article

Dubljević, V. Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles. Sci Eng Ethics 26, 2461–2472 (2020).

Keywords
  • Agent–deed–consequence (ADC) model
  • Autonomous vehicles (AVs)
  • Artificial intelligence (AI)
  • Artificial neural networks
  • Artificial morality
  • Neuroethics