Against the moral Turing test: accountable design and the moral reasoning of autonomous systems
This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, qualified, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12(3):251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (Gerdes and Øhrstrøm in J Inf Commun Ethics Soc 13(2):98–109, 2015). While these efforts have not produced a detailed MTT, they nonetheless retain the idea as a frame for discussing what kinds of action and reasoning should be demanded of autonomous systems. We explore the flawed basis of an MTT in imitation, even one built on scenarios of morally accountable action. MTT-based evaluations are vulnerable to deception, to inadequate reasoning, and to moral performance inferior to a system's actual capabilities. We propose that verification, which demands the design of transparent, accountable processes of reasoning that reliably prefigure a system's performance, serves as a superior framework for designer and system alike. As autonomous social robots in particular take on an increasing range of critical roles within society, we conclude that verification offers an essential, albeit challenging, moral measure of their design and performance.
Keywords: Robot ethics · Artificial moral agents · Moral Turing test · Verification · Human–robot interaction
- Ball, P. (2015). The truth about the Turing Test. BBC. http://www.bbc.com/future/story/20150724-the-problem-with-the-turing-test/.
- Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103, 513.
- Gerdes, A., & Øhrstrøm, P. (2013). Preliminary reflections on a moral Turing test. In Proceedings of ETHICOMP (pp. 167–174).
- Henig, R. (2015). Death by robot. New York Times. www.nytimes.com/2015/01/11/magazine/death-by-robot.html.
- Johnson-Laird, P. N. (1988). The computer and the mind: An introduction to cognitive science. Cambridge: Harvard University Press.
- Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Harmondsworth: Penguin.
- Lin, P. (2013). The ethics of autonomous cars. The Atlantic. www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/.
- Lin, P. (2015). We’re building superhuman robots. Will they be heroes, or villains? Washington Post. www.washingtonpost.com/news/in-theory/wp/2015/11/02/were-building-superhuman-robots-will-they-be-heroes-or-villains/.
- Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the 10th ACM/IEEE International Conference on Human–Robot Interaction.
- Millar, J. (2014). An ethical dilemma: When robot cars must kill, who should pick the victim? Robohub. robohub.org/an-ethical-dilemma-when-robot-cars-must-kill-who-should-pick-the-victim/.
- Open Roboethics Initiative (2014). My (autonomous) car, my safety: Results from our reader poll. Open Roboethics Initiative. http://www.openroboethics.org/results-my-autonomous-car-my-safety/.
- Sample, I., & Hern, A. (2014). The Guardian. www.theguardian.com/technology/2014/jun/09/scientists-disagree-over-whether-turing-test-has-been-passed/.
- Scheutz, M., & Malle, B. F. (2014). "Think and do the right thing"—A plea for morally competent autonomous robots. In 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering (pp. 1–4). IEEE.
- Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
- Wilson, J. R., & Scheutz, M. (2015). A model of empathy to shape trolley problem moral judgements. In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 112–118). IEEE.