Abstract
This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12(3):251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (Gerdes and Øhrstrøm in J Inf Commun Ethics Soc 13(2):98–109, 2015). While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea as a way to discuss what kinds of action and reasoning should be demanded of autonomous systems. We explore the flawed basis of an MTT in imitation, even one restricted to scenarios of morally accountable action. MTT-based evaluations are vulnerable to deception, inadequate reasoning, and inferior moral performance vis-à-vis a system's capabilities. We propose that verification, which demands the design of transparent, accountable processes of reasoning that reliably prefigure the performance of autonomous systems, serves as a superior framework for designer and system alike. As autonomous social robots in particular take on an increasing range of critical roles within society, we conclude that verification offers an essential, albeit challenging, moral measure of their design and performance.
References
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12(3), 251–261.
Allen, C., Wallach, W., & Smit, I. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.
Ball, P. (2015). The truth about the Turing Test. BBC. http://www.bbc.com/future/story/20150724-the-problem-with-the-turing-test/.
Bringsjord, S. (1992). What robots can and can’t be. Dordrecht: Kluwer Academic.
Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103, 513.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.
Gerdes, A. (2015). The issue of moral consideration in robot ethics. SIGCAS Computers & Society, 45(3), 274.
Gerdes, A., & Øhrstrøm, P. (2013). Preliminary reflections on a moral Turing test. In Proceedings of ETHICOMP (pp. 167–174).
Gerdes, A., & Øhrstrøm, P. (2015). Issues in robot ethics seen through the lens of a moral Turing test. Journal of Information, Communication and Ethics in Society, 13(2), 98–109.
Harnad, S. (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines, 1(1), 43–54.
Henig, R. (2015). Death by robot. New York Times. www.nytimes.com/2015/01/11/magazine/death-by-robot.html.
Hintikka, J. (1962). Cogito, ergo sum: Inference or performance? The Philosophical Review, 71(1), 3–32.
Johnson-Laird, P. N. (1988). The computer and the mind: An introduction to cognitive science. Cambridge: Harvard University Press.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Harmondsworth: Penguin.
Lin, P. (2013). The ethics of autonomous cars. The Atlantic. www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/.
Lin, P. (2015). We’re building superhuman robots. Will they be heroes, or villains? Washington Post. www.washingtonpost.com/news/in-theory/wp/2015/11/02/were-building-superhuman-robots-will-they-be-heroes-or-villains/.
Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the 10th ACM/IEEE International Conference on Human–Robot Interaction.
Millar, J. (2014). An ethical dilemma: When robot cars must kill, who should pick the victim? Robohub. robohub.org/an-ethical-dilemma-when-robot-cars-must-kill-who-should-pick-the-victim/.
Moor, J. H. (2001). The status and future of the Turing test. Minds and Machines, 11(1), 77–93.
Open Roboethics Initiative (2014). My (autonomous) car, my safety: Results from our reader poll. Open Roboethics Initiative. http://www.openroboethics.org/results-my-autonomous-car-my-safety/.
Pagallo, U. (2013). The laws of robots: Crimes, contracts, and torts (Vol. 10). Berlin: Springer Science & Business Media.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge: Harvard University Press.
Sample, I., & Hern, A. (2014). The Guardian. www.theguardian.com/technology/2014/jun/09/scientists-disagree-over-whether-turing-test-has-been-passed/.
Scheutz, M., & Malle, B. F. (2014). "Think and do the right thing"—A plea for morally competent autonomous robots. In Proceedings of the 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering (pp. 1–4). IEEE.
Stahl, B. C. (2004). Information, ethics, and computers: The problem of autonomous moral agents. Minds and Machines, 14(1), 67–83.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Wilson, P. (1979). Utility-theoretic indexing. Journal of the American Society for Information Science, 30(3), 169–170.
Wilson, J. R., & Scheutz, M. (2015). A model of empathy to shape trolley problem moral judgements. In Proceedings of the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 112–118). IEEE.
Arnold, T., Scheutz, M. Against the moral Turing test: accountable design and the moral reasoning of autonomous systems. Ethics Inf Technol 18, 103–115 (2016). https://doi.org/10.1007/s10676-016-9389-x