
Can AI-Based Decisions be Genuinely Public? On the Limits of Using AI-Algorithms in Public Institutions

  • Original Article
  • Published in Jus Cogens

Abstract

AI-based algorithms are used extensively by public institutions. AI algorithms have been used, for instance, to make decisions concerning punishment, to provide welfare payments, to make parole decisions, and to perform many other tasks that have traditionally been assigned to public officials and/or public entities. We develop a novel argument against the use of AI algorithms, in particular with respect to decisions made by public officials and public entities. We argue that decisions made by AI algorithms cannot count as public decisions, namely decisions made in the name of citizens, and that this fact should be taken into consideration when using AI to replace public officials.


Data Availability

No data was generated or analyzed.

Notes

  1. On February 5, 2020, the District Court of The Hague held that the System Risk Indication (SyRI) algorithm system, a legal instrument that the Dutch government used to detect fraud in areas such as benefits, allowances, and taxes, violated article 8 of the European Convention on Human Rights (ECHR) (right to respect for private and family life). The system combined several governmental databases to detect suspicious patterns without being transparent to the citizens involved and without asking them for consent. There are many choices that need to be reviewed before such a system can be put into operation, but the main reason why the Dutch court ruled it illegal was the lack of transparency as to how the system reached its conclusion.

  2. One concern is that circumstances change or are unstable, and hence the decisions made by the AI algorithm become outdated; see Fagan and Levmore (2019). A second primary concern with AI systems is the problem of bias; see Heilweil (2020), Ntoutsi et al. (2020), and Borgesius (2018).

  3. For a brief explanation of what neural networks are and the way in which they operate:

    “The human brain is the inspiration behind neural network architecture. Human brain cells, called neurons, form a complex, highly interconnected network and send electrical signals to each other to help humans process information. Similarly, an artificial neural network is made of artificial neurons that work together to solve a problem. Artificial neurons are software modules, called nodes, and artificial neural networks are software programs or algorithms that, at their core, use computing systems to solve mathematical calculations.”

  4. An example of database deficiencies leading to grave results can be found in Google’s image recognition software accidentally cataloging Black people as gorillas. Google was unable to fix this problem directly and circumvented the issue by removing the “gorilla” category instead. See: https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people (accessed 2 November 2023).

  5. EU Regulation 2018/1725.

  6. Examples can be found in US mortgage rates. See: A.I. Bias Caused 80% Of Black Mortgage Applicants To Be Denied—Culture Banx. https://www.culturebanx.com/cbx-daily/a-i-bias-caused-80-of-black-mortgage-applicants-to-be-denied/ (accessed 2 November 2023).

  7. For example, see Transparency and Open Government | whitehouse.gov, https://obamawhitehouse.archives.gov/the-press-office/transparency-and-open-government.

  8. As AI experts often maintain, full or complete transparency is not necessary to achieve accountability. Instead, “what society needs are transparency policies that are thoughtfully contextualized to specific decision domains…” ibid. Treating transparency as a means to accountability also determines its optimal scope: transparency is required only when, and to the extent that, it serves the goal of accountability. Transparency is designed to remedy defects in the system, and the scope of the required transparency is designed to serve this remedial function, namely preventing or remedying defects in the decision-making process.

  9. Gal 2018, 83. The importance of engaging in the act of “choosing” is emphasized by Michal Gal, who claims that “This argument likens our decision-making capacity to a muscle that needs to be exercised in order to stay in shape.”

  10. For a criticism of this argument, see Duus-Otterström and Poama (2023) in this special issue.

  11. The Academic Center for Law and Business v Minister of Finance (2009) HCJ, The Human Rights Division, 2605/05 An English translation is available at: https://versa.cardozo.yu.edu/sites/default/files/upload/opinions/Academic%20Center%20of%20Law%20and%20Business%20v.%20Minister%20of%20Finance.pdf.

  12. Weber 1994. This qualitative difference between public officials and private individuals underlies Max Weber’s familiar observation that the public official “takes pride in … overcoming his own inclinations and opinions, so as to execute in a conscientious and meaningful way what is required of him … even—and particularly—when they do not coincide with his political views.”

  13. See the story “Franchise” by Isaac Asimov. In this story, the USA has converted to an “electronic democracy” in which the computer Multivac selects a single person to answer a number of questions. Multivac then uses the answers and other data to determine what the results of an election would be, avoiding the need for an actual election to be held.

References


Author information


Contributions

Co-authored.

Corresponding author

Correspondence to Alon Harel.

Ethics declarations

Ethics Approval

Not applicable.

Consent to Participate

Not applicable.

Research Involving Human Participants and/or Animals

Not applicable.

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Harel, A., Perl, G. Can AI-Based Decisions be Genuinely Public? On the Limits of Using AI-Algorithms in Public Institutions. Jus Cogens 6, 47–64 (2024). https://doi.org/10.1007/s42439-023-00088-7

