The value of responsibility gaps in algorithmic decision-making

  • Original Paper
  • Published:
Ethics and Information Technology

Abstract

Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to suggest that responsibility gaps should sometimes be welcomed, our argument is novel. Others have argued that responsibility gaps should sometimes be welcomed because they can reduce or eliminate the psychological burdens caused by tragic moral choice-situations. By contrast, our argument explains why responsibility gaps should sometimes be welcomed even in the absence of tragic moral choice-situations, and even in the absence of psychological burdens.

Notes

  1. Throughout the paper, we shall talk about ‘AI systems’, but we take this to include simple rule-based systems, machine learning systems, deep learning systems, and the like.

  2. See Kraaijeveld (2020); Pagallo (2011); Tigard (2021); Matthias (2004); Sparrow (2007); Rubel et al. (2019).

  3. A burgeoning literature discusses whether artificial, non-human agents can be held responsible under certain conditions (see for instance Sebastián 2021; List 2021). We set this discussion aside here, since even if automatons could aptly be held responsible, this would change nothing from the perspective of our argument.

  4. For examples of theorists who believe that replacing humans with AI systems can create responsibility gaps, see (Matthias, 2004; Sparrow, 2007; Danaher, 2016; De Jong, 2020; Kiener, 2022; Danaher, forthcoming).

  5. See Goetze (2022) for discussion of the tracing-back strategy.

  6. See Simpson and Müller (2016) for discussion of this response.

  7. See also Kiener (2022) and Hanson (2009) for further discussion of how to “bridge” responsibility gaps.

  8. It has been argued that AI systems that make “social decisions” like the one in Decision-Procedure Designer are often highly error-prone (Raji et al., 2022). We are not concerned with extant AI systems, however, but with AI systems that work as described above. We should also mention that many have argued that there are excellent reasons, unrelated to responsibility gaps, not to replace human decision-makers with AI systems. Finally, it has been argued that AI systems are often no better than basic statistical techniques (Narayanan, 2019), which makes it less clear why we should be particularly concerned with replacing human decision-makers with AI systems per se. However, we shall set such worries aside for the sake of argument. We thank an anonymous reviewer for suggesting that we make these observations explicit.

  9. Of course, the system will also distribute undeserved benefits to some. But since it is harder to see such cases as wrongings of any particular individuals, we will focus on the other type of error here.

  10. There are several reasons why it can be important to know who is responsible for erroneous decisions. First, we sometimes need to know whom to punish for such decisions. Second, knowing who is responsible can help us avoid erroneous decisions in the future. Third, it can be important when we need to provide redress to the “victims” of erroneous decisions (Goetze, 2022; Gotterbarn, 2001; Nissenbaum, 1994). Fourth, it can be important when we need to explain to victims why errors were made (Coeckelbergh, 2021). Thanks to an anonymous reviewer for suggesting that we highlight these different reasons.

  11. Notice that Himmelreich only intends this to pick out one type of responsibility gap; it is not meant as a complete account. See Hindriks and Veluwenkamp (2023) for discussion of Himmelreich’s account and of other ways of conceptualizing responsibility gaps. Hindriks and Veluwenkamp are skeptical of there being responsibility gaps, arguing that in the relevant range of cases responsibility is always indirect or the harm is blameless (so there is no room for “gaps” in responsibility). As stated before, we remain neutral on whether responsibility gaps exist, but note that even if Hindriks and Veluwenkamp are right, our argument speaks to the desirability of cases of blameless harm.

  12. But see Santoni de Sio and Mecacci (2021), who helpfully distinguish four interpretations of the term “responsibility gap”; see also Goetze (2022).

  13. There is a substantive question here about how “thick” the judgment that somebody is morally responsible for some outcome is. On probably the thinnest possible interpretation, A is morally responsible for some outcome O when A caused O. On a thicker notion, such as the one employed by Santoni de Sio and Mecacci (2021: 1062), responsibility for an outcome tracks blameworthiness (provided the outcome is one that warrants blame); this is what they call “culpability”.

  14. For our purposes we can understand the idea of being “held responsible” broadly. It may include activities such as blaming, punishing or harming.

  15. This case is inspired by a similar case from Tadros (2020).

  16. Thanks to an anonymous reviewer for pressing us to discuss this objection.

  17. See also Sparrow (2007); Danaher (2016); Felder (2021).

  18. See Tessman (2017) for discussion of moral dilemmas.

  19. We thank an anonymous reviewer for asking us to elaborate on this point.

References

  • Alexander, L., & Ferzan, K. (2018). Reflections on crime and culpability: problems and puzzles. Cambridge: Cambridge University Press.

  • Baum, K., Mantel, S., Schmidt, E., & Speith, T. (2022). From responsibility to reason-giving explainable artificial intelligence. Philosophy & Technology, 35(1), 12.

  • Bjerring, J. C., & Busch, J. (2021). Artificial intelligence and patient-centered decision-making. Philosophy & Technology, 34, 349–371.

  • Coeckelbergh, M. (2021). AI Ethics. MIT Press.

  • Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18(4), 299–309.

  • Danaher, J. (forthcoming). Tragic choices and the virtue of techno-responsibility gaps. Philosophy & Technology.

  • De Jong, R. (2020). The retribution-gap and responsibility-loci related to robots and automated technologies: a reply to Nyholm. Science and Engineering Ethics, 26(2), 727–735.

  • Dworkin, G. (2020). Paternalism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/fall2020/entries/paternalism/.

  • Feier, T., Gogoll, J., & Uhl, M. (2022). Hiding behind machines: artificial agents may help to evade punishment. Science and Engineering Ethics, 28.

  • Felder, R. (2021). Coming to terms with the black box problem: how to justify AI systems in health care. Hastings Center Report, 51(4), 38–45.

  • Fischer, J., & Tognazzini, N. A. (2009). The truth about tracing. Noûs, 43(3), 531–556.

  • Goetze, T. (2022). Mind the gap: autonomous systems, the responsibility gap, and moral entanglement. FAccT ’22.

  • Gotterbarn, D. (2001). Informatics and professional responsibility. Science and Engineering Ethics, 7, 221–230.

  • Hanson, F. A. (2009). Beyond the skin bag: on the moral responsibility of extended agencies. Ethics and Information Technology, 11(1), 91–99.

  • Himmelreich, J. (2019). Responsibility for Killer Robots. Ethical Theory and Moral Practice, 22(3), 731–747.

  • Hindriks, F., & Veluwenkamp, H. (2023). The risks of autonomous machines: from responsibility gaps to control gaps. Synthese, 201, 21.

  • Kiener, M. (2022). Can we bridge AI’s responsibility gap at will? Ethical Theory and Moral Practice, 25, 575–593.

  • Königs, P. (2022). Artificial intelligence and responsibility gaps: what is the problem? Ethics and Information Technology, 24(36).

  • Kraaijeveld, S. R. (2020). Debunking (the) retribution (gap). Science and Engineering Ethics, 26, 1315–1328.

  • Langer, M., König, C. J., & Fitili, A. (2018). Information as a double-edged sword: the role of computer experience and information on applicant reactions towards novel technologies for personnel selection. Computers in Human Behavior, 81, 19–30.

  • Levinson, J., Askeland, J., Becker, J., Dolson, J., Held, D., Kammel, S., Kolter, J. Z., et al. (2011). Towards fully autonomous driving: systems and algorithms. IEEE Intelligent Vehicles Symposium (IV), 163–168.

  • List, C. (2021). Group Agency and Artificial Intelligence. Philosophy & Technology, 34, 1213–1242.

  • Matthias, A. (2004). The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.

  • Narayanan, A. (2019). How to recognize AI snake oil. Arthur Miller Lecture on Science and Ethics, Massachusetts Institute of Technology. http://www.cs.princeton.edu/~arvindn/talks.

  • Nissenbaum, H. (1994). Computing and accountability. Communications of the ACM, 37(1), 72–80.

  • Pagallo, U. (2011). Killers, fridges, and slaves: a legal journey in robotics. AI & Society, 26, 347–354.

  • Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022). The fallacy of AI functionality. FAccT ’22.

  • Rubel, A., Castro, C., & Pham, A. (2019). Agency laundering and information technologies. Ethical Theory and Moral Practice, 22(4), 1017–1041.

  • Santoni de Sio, F., & Mecacci, G. (2021). Four responsibility gaps with artificial intelligence: why they matter and how to address them. Philosophy & Technology, 34, 1057–1084.

  • Sebastián, M. (2021). First-person representations and responsible agency in AI. Synthese, 199(3), 7061–7079.

  • Simpson, T. W., & Müller, V. C. (2016). Just war and robots’ killings. The Philosophical Quarterly, 66(263), 302–322.

  • Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62–77.

  • Tadros, V. (2011). The ends of harm: the moral foundations of criminal law. Oxford: Oxford University Press.

  • Tadros, V. (2020). Distributing responsibility. Philosophy & Public Affairs, 48(3), 223–261.

  • Tessman, L. (2017). When doing the right thing is impossible. Oxford University Press.

  • Tigard, D. (2021). There is no techno-responsibility gap. Philosophy & Technology, 34, 589–607.

  • Topol, E. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56.

  • Walen, A. (2021). Retributive justice. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/sum2021/entries/justice-retributive/.

Author information

Corresponding author

Correspondence to Jakob Mainz.


About this article

Cite this article

Munch, L., Mainz, J. & Bjerring, J.C. The value of responsibility gaps in algorithmic decision-making. Ethics Inf Technol 25, 21 (2023). https://doi.org/10.1007/s10676-023-09699-6
