Ethics and Information Technology, Volume 18, Issue 2, pp 103–115

Against the moral Turing test: accountable design and the moral reasoning of autonomous systems

  • Thomas Arnold
  • Matthias Scheutz
Original Paper


Abstract

This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12(3):251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (Gerdes and Øhrstrøm in J Inf Commun Ethics Soc 13(2):98–109, 2015). While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea as a frame for discussing what kinds of action and reasoning should be demanded of autonomous systems. We explore the flawed basis of an MTT in imitation, even one based on scenarios of morally accountable actions. MTT-based evaluations are vulnerable to deception, inadequate reasoning, and inferior moral performance vis-à-vis a system's capabilities. We propose that verification, which demands the design of transparent, accountable processes of reasoning that reliably prefigure the performance of autonomous systems, serves as a superior framework for designer and system alike. As autonomous social robots in particular take on an increasing range of critical roles within society, we conclude that verification offers an essential, albeit challenging, moral measure of their design and performance.


Keywords: Robot ethics · Artificial moral agents · Moral Turing test · Verification · Human–robot interaction

References

  1. Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261.
  2. Allen, C., Wallach, W., & Smit, I. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.
  3. Ball, P. (2015). The truth about the Turing Test. BBC.
  4. Bringsjord, S. (1992). What robots can and can't be. Dordrecht: Kluwer Academic.
  5. Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103, 513.
  6. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
  7. Gerdes, A. (2015). The issue of moral consideration in robot ethics. SIGCAS Computers & Society, 45(3), 274.
  8. Gerdes, A., & Øhrstrøm, P. (2013). Preliminary reflections on a moral Turing test. In Proceedings of ETHICOMP (pp. 167–174).
  9. Gerdes, A., & Øhrstrøm, P. (2015). Issues in robot ethics seen through the lens of a moral Turing test. Journal of Information, Communication and Ethics in Society, 13(2), 98–109.
  10. Harnad, S. (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines, 1(1), 43–54.
  11. Henig, R. (2015). Death by robot. New York Times.
  12. Hintikka, J. (1962). Cogito, ergo sum: Inference or performance? The Philosophical Review, LXXI, 3–32.
  13. Johnson-Laird, P. N. (1988). The computer and the mind: An introduction to cognitive science. Cambridge: Harvard University Press.
  14. Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Harmondsworth: Penguin.
  15. Lin, P. (2013). The ethics of autonomous cars. The Atlantic.
  16. Lin, P. (2015). We're building superhuman robots. Will they be heroes, or villains? Washington Post.
  17. Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J. T., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different. In Proceedings of the 10th ACM/IEEE International Conference on Human–Robot Interaction.
  18. Millar, J. (2014). An ethical dilemma: When robot cars must kill, who should pick the victim? Robohub.
  19. Moor, J. H. (2001). The status and future of the Turing test. Minds and Machines, 11(1), 77–93.
  20. Open Roboethics Initiative (2014). My (autonomous) car, my safety: Results from our reader poll. Open Roboethics Initiative.
  21. Pagallo, U. (2013). The laws of robots: Crimes, contracts, and torts (Vol. 10). Berlin: Springer Science & Business Media.
  22. Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge: Harvard University Press.
  23. Scheutz, M., & Malle, B. F. (2014). "Think and do the right thing"—A plea for morally competent autonomous robots. In 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering (pp. 1–4). IEEE.
  24. Stahl, B. C. (2004). Information, ethics, and computers: The problem of autonomous moral agents. Minds and Machines, 14(1), 67–83.
  25. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
  26. Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
  27. Wilson, P. (1979). Utility-theoretic indexing. Journal of the American Society for Information Science, 30(3), 169–170.
  28. Wilson, J. R., & Scheutz, M. (2015). A model of empathy to shape trolley problem moral judgements. In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 112–118). IEEE.

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. Department of Computer Science, Human–Robot Interaction Laboratory, Tufts University, Medford, USA
