Abstract
Within a few decades, autonomous robotic devices, computing machines, autonomous cars, drones and the like will be among us in numbers, forms and roles unimaginable only 20 or 30 years ago. How can we be sure that these machines will not, under any circumstances, harm us? We need a verification criterion: a test that would verify an autonomous machine’s aptitude for making “good” rather than “bad” decisions. This chapter discusses what such a test would consist of. We will call it the ethical machine safety test, or machine safety test (MST) for short. Making “good” or “bad” choices is associated with ethics. By analogy, the ability of autonomous machines to make such choices is often interpreted as an ethical ability of the machine, which is not strictly correct. The MST is not intended to prove that machines have reached the level of moral standing people have, or a level of autonomy that endows them with “moral personality” and makes them responsible for what they do. The MST is intended to verify that autonomous machines are safe to be around us.
Notes
- 1.
- 2.
We need to keep in mind that the future full of happiness and unalloyed human flourishing promised by light-minded AI and robotics enthusiasts is just an uncritical and hardly justified fairy-tale fantasy. I propose to leave behind Star Trek fans, Asimov’s Three Laws of Robotics and other Sci-Fi phantasms. History does not justify such a vision at all (unfortunately!). Recall the cautionary words about progress offered by more discerning minds: “What the Enlightenment thinkers never envisioned was that irrationality would continue to flourish alongside rapid development in science and technology… In fact, [there is] no consistent link between the adoption of modern science and technology on the one hand and the progress of reason in human affairs on the other … There is nothing in the spread of new technologies that regularly leads to the adoption of what we like to think of as a modern, rational worldview” (Gray 2007, p. 18).
- 3.
“The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata” (Allen et al. 2000). Seibt writes: “…we are currently in a situation of epistemic uncertainty where we still lack predictive knowledge about the individual and socio-cultural impact of the placement of social robots into human interactions space, and we are unclear on which aspects of human interactions with social robots lend themselves to predictive analysis” (Seibt 2012).
- 4.
For someone who cannot accept a concept of deep ethics, a more technical explanation of what ethics really entails may be easier to comprehend: “… given the complexity of human values, specifying a single desirable value is insufficient to guarantee an outcome positive for humans. Outcomes in which a single value is highly optimized while other values are neglected tend to be disastrous for humanity, as for example one in which a happiness-maximizer turns humans into passive recipients of an electrical feed into pleasure centers of the brain. For a positive outcome, it is necessary to define a goal system that takes into account the entire ensemble of human values simultaneously” (Yampolskiy and Fox 2012).
- 5.
It is critical to understand this difference. If we attribute ethics to machines, we may be tempted to bestow on them personality, responsibility and the like (which, unfortunately, is slowly happening). But if we say that these machines have m-ethics, which is what they have, we will make such flights of fancy much more difficult.
- 6.
The objective of the Turing test (TT) was not to verify some specific kind of intelligence; it aimed at general intelligence. Thus, success in playing chess or Go did not in fact prove or disprove a machine’s capacity to reason, according to the TT’s requirements.
- 7.
“Imaginary stories and thought experiments are often used in philosophy to clarify, exemplify, and provide evidence or counterevidence for abstract ideas and principles. Stories and thought experiments can illustrate abstract ideas and can test their credibility, or, at least, so it is claimed. As a by-product, stories and thought experiments bring literary, and even entertaining, elements into philosophy” (Lehtonen 2012).
- 8.
We want to avoid dilemmas such as the one reported by Heron and Belfort (2015): “The question of who we should blame when a robot kills a human has recently become somewhat more pressing. The recent death [in 2015] of a Volkswagen employee at the hand of an industrial factory robot has left ethicists and legislators unsure of where the moral and ethical responsibility for the death should lie—does it lie with the owners, the developers, the factory managers, or elsewhere?”
- 9.
“‘Ethics’, as understood in modernity, focuses on the rightness and wrongness of actions. The focus is misleading in that actions never occur outside of the wider social and natural contexts to which they respond. Individual, community, and society clearly constitute such contexts, on the different levels of the natural … ‘human’ world. This world comprises our interpersonal relationships as well as the natural givens” (McCumber 2007, p. 161).
- 10.
See, for example, the requirements for the testing standards for an airline pilot: https://www.faa.gov/training_testing/testing/test_standards/media/faa-s-8081-20.pdf
- 11.
“Michigan is also home to ‘M City,’ a 23-acre mini-city at the University of Michigan built for testing driverless car technology”. Available at: http://fortune.com/2017/01/20/self-driving-test-sites/
- 12.
“The carmaker’s autonomous vehicles traveled a total of 550 miles on California public roads in October and November 2016 and reported 182 ‘disengagements,’ or episodes when a human driver needs to take control to avoid an accident or respond to technical problems, according to a filing with the California Department of Motor Vehicles. That’s 0.33 disengagements per autonomous mile. Tesla reported that there were ‘no emergencies, accidents or collisions.’ Tesla’s report for 2015 specified that it didn’t have any disengagements to report” (Hall 2017).
- 13.
A few quotations substantiate this claim: “Microsoft likes to have everything glued together like a kindergarten art project gone berserk, but this is ridiculous” (Vaughan-Nichols 2014); “Microsoft Windows isn’t the only operating system for personal computers, or even the best … it’s just the best-distributed. Its inconsistent behavior and an interface that changes radically with every version are the main reasons people find computers difficult to use. Microsoft adds new bells and whistles in each release and claims that this time they’ve solved the countless problems in the previous versions … but the hype is never really fulfilled” (Anonymous, available at: http://alternatives.rzero.com/os.html [Accessed on 5/1/2017]).
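The disengagement figure quoted in note 12 can be checked with a few lines of arithmetic: 182 disengagements over 550 autonomous miles gives the reported rate of 0.33 disengagements per mile. A minimal sketch (the function name is ours, for illustration only):

```python
def disengagements_per_mile(disengagements: int, miles: float) -> float:
    """Rate used in California DMV autonomous-vehicle disengagement reports."""
    if miles <= 0:
        raise ValueError("miles must be positive")
    return disengagements / miles

# Figures quoted in note 12: 182 disengagements over 550 autonomous miles.
rate = disengagements_per_mile(182, 550)
print(round(rate, 2))  # 0.33, matching the figure quoted in the note
```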
References
Allen, C., Varner, G., Zinser, J.: Prolegomena to any future artificial moral agent. J. Exp. Theor. Artif. Intell. 12, 251–261 (2000)
Anderson, S.: The promise of ethical machines. https://www.project-syndicate.org/commentary/ethics-for-advanced-robots-by-susan-leigh-anderson-2016-12 (2016)
Anderson, M., Anderson, S.L.: Robot be good. Sci. Am. 10, 72–77 (2010)
Balch, O.: Driverless cars will make our roads safer, says Oxbotica co-founder. https://www.theguardian.com/sustainable-business/2017/apr/13/driverless-cars-will-make-our-roads-safer-says-oxbotica-co-founder (2017)
Beavers, A.F.: Is ethics computable? Presidential Address, Aarhus, Denmark, July 4. http://www.afbeavers.net/cv (2011)
Berg, A.: Revolution evolution. Finance Dev. (2016)
Bloem, J., van Doorn, M., Duivestein, S., Excoffier, D., van Maas, R., Ommeren, E.: Fourth industrial revolution. VINT research report. 3 of 4. https://slidelegend.com/queue/the-fourth-industrial-revolution-sogeti_59b503731723ddf2725f00c7.html (2014)
Bostrom, N.: Superintelligence. Oxford University Press, Oxford (2015)
Bourke, V.J.: History of Ethics, Vol. I, V.II. Axios Press, Mount Jackson, VA (2008)
Boyle, A.: AI prophets say robots could spark unemployment – and a revolution. Geekwire, February 13 (2016)
Brown, A.: YOUR job won’t exist in 20 years: Robots and AI to ‘eliminate’ ALL human workers by 2036. https://www.express.co.uk/life-style/science-technology/640744/Jobless-Future-Robots-Artificial-Intelligence-Vivek-Wadhwa (2016)
Cathcart, T.: The Trolley Problem. Workman Publishing, New York (2013)
Clifford, C.: The robots will take our jobs. Here’s why futurist Ray Kurzweil isn’t worried. Entrepreneur. https://www.entrepreneur.com/article/272212 (2017)
Cookson, C.: AI and robots threaten to unleash mass unemployment, scientists warn. Financial Times. February (2016)
Dancy, J.: The role of imaginary cases in ethics. Pac. Philos. Q. 66, 141–153 (1985)
Doris, J.M.: Lack of Character: Personality and Moral Behavior. Cambridge University Press, Cambridge (2002)
Elster, J.: How outlandish can imaginary cases be? J. Appl. Philos. 28(3), 2011 (2011)
Floreano, D., Godjecac, J., Martinoli, F., Nicoud, J.-D.: Design, control and applications of autonomous mobile robots. Swiss Federal Institute of Technology, Lausanne. https://infoscience.epfl.ch/record/63893/files/aias.pdf (1998)
Ford, M.: The Rise of Robots: Technology and the Threat of Jobless Future. Basic Books, New York (2015)
Gray, J.: Heresies: Against Progress and Other Illusions. Granta Publications, London (2007)
Hall, D.: Tesla Is Testing Self-Driving Cars on California Roads. https://www.bloomberg.com/news/articles/2017-02-01/tesla-is-testing-self-driving-cars-on-california-roads (2017)
Hern A.: Google’s Waymo invites members of public to trial self-driving vehicles. https://www.theguardian.com/technology/2017/apr/25/google-self-driving-waymo-invites-members-public-trial-vehicles-phoenix-arizona (2017)
Heron, M., Belfort, P.: Fuzzy ethics: or how I learned to stop worrying and love the bot. SIGCAS Comput. Soc. 45(4), 13 (2015)
Kaczynski, T.: Industrial society and its future. http://editions-hache.com/essais/pdf/kaczynski2.pdf (1995)
Krzanowski, R., Mamak, K., Trombik, K., Gradzka, E.: Ethics: computable, non-computable or nonsensical? In defense of computing machines. Machine Ethics and Machine Law Conference. Jagiellonian University, Cracow, Poland, 18–19 November 2016
Lehtonen, T.: Idealization and exemplification as tools of philosophy. E-logos. Electro. J. Philos. 16 (2012)
MacIntyre, A.: A Short History of Ethics. Notre Dame Press, Notre Dame (1998)
McCumber, J.: Reshaping Reason. Indiana University Press, Bloomington (2007)
Milgram, S.: Behavioral study of obedience. J. Abnorm. Soc. Psychol. 67(4), 371–378 (1963)
Moor, J.H.: The nature, importance and difficulty of machine ethics. IEEE Intell. Syst. 21(4), 18–21 (2006)
Moor, J.H.: Four kinds of ethical robots. Philosophy Now. 72, 12–14 (2009)
Ni, R., Leung, J.: Safety and liability of autonomous vehicle technologies. https://groups.csail.mit.edu/mac/classes/6.805/student-papers/fall14-papers/Autonomous_Vehicle_Technologies.pdf (2016)
Oppy, G., Dowe, D.: The Turing Test. The Spring 2016 Edition of the Stanford Encyclopedia of Philosophy. http://plato.stanford.edu/archives/spr2016/entries/turing-test/ (2016)
Lin, P., Bekey, G., Abney, K.: Autonomous Military Robotics: Risk, Ethics, Design. US Department of Navy, Office of Naval Research, Arlington (2008)
Pew Research Center. AI, robotics, and the future of jobs. http://www.pewinternet.org/2014/08/06/future-of-jobs/ (2014)
Sandel, M.J.: Justice: What’s the Right Thing to Do? Penguin Books, London (2010)
Saygin, A.P., Cicekli, I., Akman, V.: Turing test: 50 years after. Minds Mach. 10, 463–518 (2000)
Schwab, K.: Why everyone must get ready for the 4th industrial revolution. http://www.forbes.com/sites/bernardmarr/2016/04/05/why-everyone-must-get-ready-for-4th-industrial-revolution/2/#a9fc30f40c8c (2016)
Schwab, K.: The Fourth Industrial Revolution. Crown Business, New York (2017)
Seibt, J.: “Integrative social robotics” - a new method paradigm to solve the description problem and the regulation problem? In: Frontiers in Artificial Intelligence and Applications. Volume 290: What Social Robots Can and Should Do. IOS Press, Amsterdam (2012)
Seibt, J.: Towards an ontology of simulated social interaction. In: Hakli, R., Seibt, J. (eds.) Sociality and Normativity for Robots. Studies in the Philosophy of Sociality, vol. 9. Springer, New York (2017)
Sparrow, R.: The Turing triage test. Ethics Inf. Technol. 6(4), 203–213 (2004)
Sparrow, R.: The Turing Triage Test. When is a robot worthy of moral respect? http://www.thecritique.com/articles/the-turing-triage-test-when-is-a-robot-worthy-of-moral-respect/ (2014)
Stanford Prison Experiment. https://www.prisonexp.org (2014)
Stilgoe, J.: Self-driving cars will only work when we accept autonomy is a myth. https://www.theguardian.com/science/political-science/2017/apr/07/autonomous-vehicles-will-only-work-when-they-stop-pretending-to-be-autonomous (2017a)
Stilgoe, J.: Machine learning, social learning and the governance of self-driving cars. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2937316 (2017b)
Sullins, J.: Introduction: open questions in roboethics. Philos. Technol. 24, 233 (2011)
Gendler, T.S.: Thought Experiment: On the Powers and Limits of Imaginary Cases. Routledge, New York (2000)
Turing, A.M.: Computing machinery and intelligence. Mind. 49, 433–460 (1950)
Turner, R.: The Philosophy of Computer Science. The Winter 2016 Edition of the Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/computer-science/ (2016)
Vaughan-Nichols, S.J.: At Microsoft, quality seems to be job none. Computerworld, 16 December 2014
Vardy, P., Grosch, P.: The Puzzle of Ethics. Fount, London (1999)
Veatch, H.B.: Rational Man. Indiana University Press, London (1973)
Yampolskiy, R.V.: Leakproofing the singularity: artificial intelligence confinement problem. J. Conscious. Stud. (JCS). 19(1–2), 194 (2012a)
Yampolskiy, R.V.: Artificial intelligence safety engineering: why machine ethics is a wrong approach. In: Müller, V.C. (ed.) Philosophy and Theory of Artificial Intelligence, SAPERE, vol. 5, pp. 389–396. Springer, New York (2012b)
Yampolskiy, R.V., Fox, J.: Safety engineering for artificial general intelligence. Topoi. https://intelligence.org/files/SafetyEngineering.pdf (2012)
Acknowledgments
We would like to thank Prof. Pawel Polak for his constructive comments on an early draft of this paper. All the errors, faulty conclusions and logical and factual mistakes are, of course, our own.
Copyright information
© 2021 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Krzanowski, R.M., Trombik, K. (2021). Ethical Machine Safety Test. In: Hofkirchner, W., Kreowski, HJ. (eds) Transhumanism: The Proper Guide to a Posthuman Condition or a Dangerous Idea?. Cognitive Technologies. Springer, Cham. https://doi.org/10.1007/978-3-030-56546-6_10
DOI: https://doi.org/10.1007/978-3-030-56546-6_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-56545-9
Online ISBN: 978-3-030-56546-6
eBook Packages: Computer Science (R0)