
Would Moral Machines Close the Responsibility Gap?

Chapter in: Technology, Anthropology, and Dimensions of Responsibility

Part of the book series: Techno:Phil – Aktuelle Herausforderungen der Technikphilosophie (TPAHT, volume 1)

Abstract

Questions about moral machines are ever-present in contemporary discourse about robots and artificial intelligence. What would a machine that acts morally be like? Is it actually possible to build one? Should we work toward this goal, and why? Would observable moral behavior be sufficient for a machine to count as a moral being, or would it need some ‘subjective’ foundation in its inner workings? In other words: would it be enough for a ‘moral machine’ to behave as if it were sensitive to our moral affairs, or would it have to be able to sense and understand our moral concerns? The present paper focuses on a few selected aspects connecting moral machines to the so-called responsibility gap. By introducing a number of clarifications and a particular perspective on the issues at hand, it answers in the negative the question of whether this technological issue should be addressed by approximating machines to humans. Reflection and debate on these and similar issues would therefore have to shift their focus from isolated technological artifacts to the much wider contexts of application and socio-technological interaction.


Notes

  1. From here on, I will use the term ‘artificial agent’ as a general term for robots and artificially intelligent artifacts.

  2. For a lucid and exhaustive summary of the debate, see Noorman (2018).

  3. Admittedly, this wording of a necessary condition for responsibility is oversimplified. For a much more thorough account, see van de Poel (2015) and Sombetzki (2014).

  4. Of course, as mentioned earlier, there are many settings in which personal supervision and control would eliminate the advantages of artificial agents in the first place. For these settings, option 2 or 3 is obviously preferable.

  5. An exception would be terrorists, who do not care about anyone’s benefit (including their own) when employing dangerous artificial agents. But even the military has no interest in using systems for destruction that it cannot control in any of the described ways (Noorman and Johnson 2014).

References

  • Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334). Cambridge: Cambridge University Press.

  • Bryson, J. J. (2018). Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15–26. https://doi.org/10.1007/s10676-018-9448-6.

  • Christaller, T., Decker, M., Gilsbach, J.-M., Hirzinger, G., Lauterbach, K., Schweighofer, E., et al. (2001). Robotik – Perspektiven für menschliches Handeln in der zukünftigen Gesellschaft. Berlin: Springer.

  • Christaller, T. (Ed.). (2003). Autonome Maschinen. Wiesbaden: Westdeutscher Verlag.

  • Darling, K. (2017). ‘Who’s Johnny?’ Anthropomorphic framing in human-robot interaction, integration, and policy. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0. New York: Oxford University Press.

  • Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press.

  • Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349–379.

  • Grunwald, A. (2012). Can robots plan, and what does the answer to this question mean? In M. Decker & M. Gutmann (Eds.), Robo- and informationethics: Some fundamentals (pp. 189–209). Berlin: LIT.

  • Johnson, D. G., & Noorman, M. (2014). Artefactual agency and artefactual moral agency. In P. Kroes & P.-P. Verbeek (Eds.), The moral status of technical artefacts (pp. 143–158). Dordrecht: Springer.

  • Kroes, P., & Verbeek, P.-P. (Eds.). (2014). The moral status of technical artefacts. Dordrecht: Springer.

  • Sombetzki (now Loh), J. (2014). Verantwortung als Begriff, Fähigkeit, Aufgabe: Eine Drei-Ebenen-Analyse. Wiesbaden: Springer VS.

  • Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1.

  • Müller, M. F. (2014). Von vermenschlichten Maschinen und maschinisierten Menschen. In S. Brändli, R. Harasgama, R. Schister, & A. Tamò (Eds.), Mensch und Maschine – Symbiose oder Parasitismus? (pp. 125–142). Bern: Stämpfli.

  • Neuhäuser, C. (2014). Roboter und moralische Verantwortung. In E. Hilgendorf (Ed.), Robotik im Kontext von Recht und Moral (Vol. 3, pp. 269–286). Baden-Baden: Nomos.

  • Noorman, M., & Johnson, D. G. (2014). Negotiating autonomy and responsibility in military robots. Ethics and Information Technology, 16(1), 51–62. https://doi.org/10.1007/s10676-013-9335-0.

  • Noorman, M. (2018). Computing and moral responsibility. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2018 ed.). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2018/entries/computing-responsibility/.

  • Porter, Z., Habli, I., Monkhouse, H., & Bragg, J. (2018). The moral responsibility gap and the increasing autonomy of systems. First International Workshop on Artificial Intelligence Safety Engineering. White Rose Research Online. http://eprints.whiterose.ac.uk/133488/.

  • Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., … Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529, 484–489. https://doi.org/10.1038/nature16961.

  • van de Poel, I. (2015). Moral responsibility. In I. van de Poel, L. Royakkers, & S. D. Zwart (Eds.), Moral responsibility and the problem of many hands (pp. 12–49). New York: Routledge.

  • von Wright, G. H. (1967). The logic of action – A sketch. In N. Rescher (Ed.), The logic of decision and action (pp. 121–139). Pittsburgh: University of Pittsburgh Press.

  • Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York: Oxford University Press.


Author information

Correspondence to Peter Remmers.


Copyright information

© 2020 Springer-Verlag GmbH Germany, part of Springer Nature

About this chapter


Cite this chapter

Remmers, P. (2020). Would Moral Machines Close the Responsibility Gap? In: Beck, B., Kühler, M. (eds) Technology, Anthropology, and Dimensions of Responsibility. Techno:Phil – Aktuelle Herausforderungen der Technikphilosophie, vol 1. J.B. Metzler, Stuttgart. https://doi.org/10.1007/978-3-476-04896-7_10


  • DOI: https://doi.org/10.1007/978-3-476-04896-7_10


  • Publisher Name: J.B. Metzler, Stuttgart

  • Print ISBN: 978-3-476-04895-0

  • Online ISBN: 978-3-476-04896-7

