
Part of the book series: Cognitive Technologies (COGTECH)


Abstract

Within a few decades, autonomous robotic devices, computing machines, autonomous cars, drones, and the like will be among us in numbers, forms and roles unimaginable only 20 or 30 years ago. How can we be sure that these machines will not, under any circumstances, harm us? We need a verification criterion: a test that would verify an autonomous machine’s aptitude to make “good” rather than “bad” decisions. This chapter discusses what such a test would consist of. We will call this test the ethical machine safety test, or machine safety test (MST) for short. Making “good” or “bad” choices is associated with ethics. By analogy, the ability of autonomous machines to make such choices is often interpreted as a machine’s ethical ability, which is not strictly correct. The MST is not intended to prove that machines have reached the level of moral standing that people have, or a level of autonomy that endows them with “moral personality” and makes them responsible for what they do. The MST is intended to verify that autonomous machines are safe to have around us.


Notes

  1. “Ethical machines would pose no threat to humanity. On the contrary, they would help us considerably, not just by working for us, but also by showing us how we need to behave if we are to survive as a species” (Anderson and Anderson 2010; Bostrom 2015; Anderson 2016). See also Schwab (2016, 2017).

  2. We need to keep in mind that the future full of happiness and unalloyed human flourishing promised by light-minded AI and robotics enthusiasts is just an uncritical and hardly justified fairy-tale fantasy. I propose to leave behind Star Trek fans, Asimov’s Three Laws of Robotics and other Sci-Fi phantasms. History does not justify such a vision at all (unfortunately!). Recall the cautionary words about progress offered by more discerning minds: “What the Enlightenment thinkers never envisioned was that irrationality would continue to flourish alongside rapid development in science and technology… In fact, [there is] no consistent link between the adoption of modern science and technology on the one hand and the progress of reason in human affairs on the other … There is nothing in the spread of new technologies that regularly leads to the adoption of what we like to think of as a modern, rational worldview” (Gray 2007, p. 18).

  3. “The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata” (Allen et al. 2000). Seibt writes: “…we are currently in a situation of epistemic uncertainty where we still lack predictive knowledge about the individual and socio-cultural impact of the placement of social robots into human interactions space, and we are unclear on which aspects of human interactions with social robots lend themselves to predictive analysis” (Seibt 2012).

  4. For someone who cannot accept the concept of deep ethics, a more technical explanation of what ethics really entails may be easier to comprehend: “… given the complexity of human values, specifying a single desirable value is insufficient to guarantee an outcome positive for humans. Outcomes in which a single value is highly optimized while other values are neglected tend to be disastrous for humanity, as for example one in which a happiness-maximizer turns humans into passive recipients of an electrical feed into pleasure centers of the brain. For a positive outcome, it is necessary to define a goal system that takes into account the entire ensemble of human values simultaneously” (Yampolskiy and Fox 2012).

  5. It is critical to understand this difference. If we attribute ethics to machines, we may be tempted to bestow on them personality, responsibility, and the like (which, unfortunately, is slowly happening). But if we say that these machines have m-ethics, which is what they have, we will make such flights of fancy much more difficult.

  6. The objective of the Turing test (TT) was not to verify some specific kind of intelligence; it was aimed at general intelligence. Thus, success at playing chess or Go does not, by the TT’s requirements, prove or disprove a machine’s capacity to reason.

  7. “Imaginary stories and thought experiments are often used in philosophy to clarify, exemplify, and provide evidence or counterevidence for abstract ideas and principles. Stories and thought experiments can illustrate abstract ideas and can test their credibility, or, at least, so it is claimed. As a by-product, stories and thought experiments bring literary, and even entertaining, elements into philosophy” (Lehtonen 2012).

  8. We want to avoid dilemmas such as the one reported by Heron and Belfort (2015): “The question of who we should blame when a robot kills a human has recently become somewhat more pressing. The recent death [in 2015] of a Volkswagen employee at the hand of an industrial factory robot has left ethicists and legislators unsure of where the moral and ethical responsibility for the death should lie—does it lie with the owners, the developers, the factory managers, or elsewhere?”

  9. “‘Ethics’, as understood in modernity, focuses on the rightness and wrongness of actions. The focus is misleading in that actions never occur outside of the wider social and natural contexts to which they respond. Individual, community, and society clearly constitute such contexts, on the different levels of the natural … ‘human’ world. This world comprises our interpersonal relationships as well as the natural givens” (McCumber 2007, p. 161).

  10. See, for example, the requirements for the testing standards for an airline pilot: https://www.faa.gov/training_testing/testing/test_standards/media/faa-s-8081-20.pdf

  11. “Michigan is also home to ‘M City,’ a 23-acre mini-city at the University of Michigan built for testing driverless car technology”. Available at: http://fortune.com/2017/01/20/self-driving-test-sites/

  12. “The carmaker’s autonomous vehicles traveled a total of 550 miles on California public roads in October and November 2016 and reported 182 ‘disengagements,’ or episodes when a human driver needs to take control to avoid an accident or respond to technical problems, according to a filing with the California Department of Motor Vehicles. That’s 0.33 disengagements per autonomous mile. Tesla reported that there were ‘no emergencies, accidents or collisions.’ Tesla’s report for 2015 specified that it didn’t have any disengagements to report” (Hall 2017). (The quoted per-mile rate is checked in the short sketch after these notes.)

  13. A few quotations substantiate this claim: “Microsoft likes to have everything glued together like a kindergarten art project gone berserk, but this is ridiculous” (Vaughan-Nichols 2014); “Microsoft Windows isn’t the only operating system for personal computers, or even the best … it’s just the best-distributed. Its inconsistent behavior and an interface that changes radically with every version are the main reasons people find computers difficult to use. Microsoft adds new bells and whistles in each release and claims that this time they’ve solved the countless problems in the previous versions … but the hype is never really fulfilled” (Anonymous, available at: http://alternatives.rzero.com/os.html [Accessed 5/1/2017]).
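
As a quick check of the per-mile figure quoted in note 12, here is a minimal sketch in Python. The mileage and disengagement counts are the figures reported in Hall (2017); the calculation itself is ours and is not part of the cited report.

    # Check the disengagement rate quoted in note 12 (figures from Hall 2017).
    autonomous_miles = 550   # miles driven autonomously, Oct-Nov 2016
    disengagements = 182     # reported human-takeover episodes

    rate = disengagements / autonomous_miles
    print(f"{rate:.2f} disengagements per autonomous mile")  # prints 0.33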


Acknowledgments

We would like to thank Prof. Pawel Polak for his constructive comments on an early draft of this paper. All errors, faulty conclusions, and logical and factual mistakes are, of course, our own.


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Krzanowski, R.M., Trombik, K. (2021). Ethical Machine Safety Test. In: Hofkirchner, W., Kreowski, HJ. (eds) Transhumanism: The Proper Guide to a Posthuman Condition or a Dangerous Idea?. Cognitive Technologies. Springer, Cham. https://doi.org/10.1007/978-3-030-56546-6_10


  • DOI: https://doi.org/10.1007/978-3-030-56546-6_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-56545-9

  • Online ISBN: 978-3-030-56546-6

  • eBook Packages: Computer Science, Computer Science (R0)
