Abstract
AI-based algorithms are used extensively by public institutions. For instance, AI algorithms have been used to make decisions concerning punishment, to provide welfare payments, to make parole decisions, and to perform many other tasks traditionally assigned to public officials and/or public entities. We develop a novel argument against the use of AI algorithms, in particular with respect to decisions made by public officials and public entities. We argue that decisions made by AI algorithms cannot count as public decisions, namely decisions made in the name of citizens, and that this fact should be taken into consideration when using AI to replace public officials.
Data Availability
No data was generated or analyzed.
Notes
On February 5, 2020, the District Court of The Hague held that the System Risk Indication (SyRI) algorithm system, a legal instrument that the Dutch government used to detect fraud in areas such as benefits, allowances, and taxes, violated Article 8 of the European Convention on Human Rights (ECHR) (the right to respect for private and family life). The system combined several governmental databases to detect suspicious patterns without being transparent to the citizens involved and without asking for their consent. Many choices need to be reviewed before such a system can be put into operation, but the main reason the Dutch court ruled it illegal was the lack of transparency as to how the system reached its conclusions.
For a brief explanation of what neural networks are and the way in which they operate:
“The human brain is the inspiration behind neural network architecture. Human brain cells, called neurons, form a complex, highly interconnected network and send electrical signals to each other to help humans process information. Similarly, an artificial neural network is made of artificial neurons that work together to solve a problem. Artificial neurons are software modules, called nodes, and artificial neural networks are software programs or algorithms that, at their core, use computing systems to solve mathematical calculations.”
An example of database deficiencies leading to grave results can be found in Google's image recognition software accidentally cataloging black people as gorillas. Google was unable to fix this problem directly and circumvented it by removing the “gorilla” category instead. See: https://www.theguardian.com/technology/2018/jan/12/google-racism-ban-gorilla-black-people (accessed 2 November 2023).
EU Regulation 2018/1725.
Examples can be found in US mortgage lending. See: A.I. Bias Caused 80% Of Black Mortgage Applicants To Be Denied—Culture Banx. https://www.culturebanx.com/cbx-daily/a-i-bias-caused-80-of-black-mortgage-applicants-to-be-denied/ (accessed 2 November 2023).
For example, see Transparency and Open Government | whitehouse.gov, https://obamawhitehouse.archives.gov/the-press-office/transparency-and-open-government.
As AI experts often maintain, full or complete transparency is not necessary to achieve accountability. Instead, “what society needs are transparency policies that are thoughtfully contextualized to specific decision domains…” ibid. Regarding transparency as a means to accountability also determines its optimal scope: transparency is required only when, and to the extent that, it serves the goal of accountability. Transparency is designed to remedy defects in the system, and the scope of the required transparency is designed to serve a remedial function, namely to facilitate the prevention or remedying of defects in the decision-making process.
Gal 2018, 83. The importance of engaging in the act of “choosing” is emphasized by Michal Gal, who claims that “This argument likens our decision-making capacity to a muscle that needs to be exercised in order to stay in shape.”
For a criticism of this argument, see Duus-Otterström and Poama (2023) in this special issue.
The Academic Center for Law and Business v Minister of Finance (2009) HCJ, The Human Rights Division, 2605/05. An English translation is available at: https://versa.cardozo.yu.edu/sites/default/files/upload/opinions/Academic%20Center%20of%20Law%20and%20Business%20v.%20Minister%20of%20Finance.pdf.
Weber 1994. This qualitative difference between public officials and private individuals underlies Max Weber’s familiar observation that the public official “takes pride in … overcoming his own inclinations and opinions, so as to execute in a conscientious and meaningful way what is required of him … even—and particularly—when they do not coincide with his political views.”
See Isaac Asimov’s short story “Franchise.” In this story, the USA has converted to an “electronic democracy” in which the computer Multivac selects a single person to answer a number of questions. Multivac then uses the answers, together with other data, to determine what the results of an election would be, avoiding the need for an actual election to be held.
References
Abbott R (2020) The Reasonable Robot. Cambridge University Press
Academic Center for Law and Business v. Minister of Finance (2009) HCJ 2605/05, Human Rights Division, para 21. https://versa.cardozo.yu.edu/sites/default/files/upload/opinions/Academic%20Center%20of%20Law%20and%20Business%20v.%20Minister%20of%20Finance.pdf. Accessed 2 Nov 2023
Androutsopoulou A et al (2019) Transforming the communication between citizens and government through AI-guided chatbots. Gov Inf Q, pp 358–367. https://doi.org/10.1016/j.giq.2018.10.001
Bamberger K (2010) Technologies of compliance: risk and regulation in a digital age. Tex Law Rev 88:669–731. https://ssrn.com/abstract=1463727
Berk R (2017) An impact assessment of machine learning risk forecasts on parole board decisions and recidivism. J Exp Criminol 13:193–216. https://doi.org/10.1007/s11292-017-9286-2
Borgesius Z (2018) Discrimination, artificial intelligence, and algorithmic decision-making. Council of Europe, Directorate General of Democracy. https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73
Brown RD (2021) Property ownership and the legal personhood of artificial intelligence. Inf Commun Technol Law 30(2):208–234. https://doi.org/10.1080/13600834.2020.1861714
Carney T (2020) Automation in social security: implications for merit review. Aust J Soc Issues 55(3):260–274. https://doi.org/10.1002/ajs4.95
Čerka P et al (2017) Is it possible to grant legal personality to artificial intelligence software systems? Comput Law Secur Rev 33(5):685–699. https://doi.org/10.1016/j.clsr.2017.03.022
Davis J (2019) Artificial wisdom? A potential limit on AI in law (and elsewhere). Oklahoma Law Rev 72(1). https://doi.org/10.2139/ssrn.3350600
De Bruijn H et al (2022) The perils and pitfalls of explainable AI: strategies for explaining algorithmic decision-making. Gov Inf Q 39(2). https://doi.org/10.1016/j.giq.2021.101666
DeCamp M, Lindvall C (2020) Latent bias and the implementation of artificial intelligence in medicine. J Am Med Inform Assoc 27(12):2020–2023. https://doi.org/10.1093/jamia/ocaa094
De Sousa W et al (2019) How and where AI in the public sector is going: a literature review and research agenda. Gov Inf Q 36(4). https://doi.org/10.1016/j.giq.2019.07.004
Diakopoulos N (2020) Transparency. In: The Oxford Handbook of Ethics of AI, chapter 10. Oxford University Press
Dorfman A, Harel A (2013) The case against privatization. Philos Public Aff 41(1):67–102. https://doi.org/10.1111/papa.12007
Dorfman A, Harel A (2016) Against privatization as such. Oxf J Leg Stud 36:400–427
Dorfman A, Harel A (2021) Law as standing. In: Gardner J et al (eds) Oxford Studies in Philosophy of Law, vol 4. Oxford Academic, pp 93–123. https://doi.org/10.1093/oso/9780192848871.003.0004
Fagan F, Levmore S (2019) The impact of artificial intelligence on rules, standards, and judicial discretion. South Calif Law Rev 93(1):367–93
Fazelpour S, Danks D (2021) Algorithmic bias: senses, sources, solutions. Philos Compass 16(8). https://doi.org/10.1111/phc3.12760
Gal M (2018) Algorithmic challenges to autonomous choice. Mich Technol Law Rev 25(1):60–104. https://doi.org/10.2139/ssrn.2971456
Gal M, Elkin-Koren N (2017) Algorithmic consumers. Harv J Law Technol 30(2):309–353
Gillis T (2020) False dreams of algorithmic fairness: the case of credit pricing. https://projects.iq.harvard.edu/fintechlaw/publications/false-dreams-algorithmic-fairness-case-credit-pricing
Glaze K et al (2022) Artificial intelligence for adjudication: the social security administration and AI governance. In: Bullock J et al. (eds) Handbook on AI Governance. Oxford Academic. https://doi.org/10.1093/oxfordhb/9780197579329.013.46
Harlow C, Rawlings R (2020) Proceduralism and automation: challenges to the values of administrative law. In: Fisher E et al. (eds), The Foundations and Future of Public Law: Essays in Honour of Paul Craig, pp 275–298. Oxford Academic. https://doi.org/10.1093/oso/9780198845249.003.0014
Heilweil R (2020) Why algorithms can be racist and sexist: a computer can make a decision faster. That doesn't make it fair. https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency
Jensen B et al (2020) Algorithms at war: the promise, peril, and limits of artificial intelligence. Int Stud Rev 22(3):526–550. https://doi.org/10.1093/isr/viz025
Johnson D, Verdicchio M (2019) AI agency and responsibility: the VW fraud case and beyond. AI Soc 34(3):639–647. https://doi.org/10.1007/s00146-017-0781-9
Kamath U, Liu J (2021) Explainable artificial intelligence: an introduction to interpretable machine learning. Springer. https://doi.org/10.1007/978-3-030-83356-5
Lang M (2021) Reviewing algorithmic decision-making in administrative law. Lex Electron 26(2):195. https://www.canlii.org/en/commentary/doc/2021CanLIIDocs2276
Le J (2018) A gentle introduction to neural networks for machine learning. Codementor Community
Lieblich E, Benvenisti E (2016) The obligation to exercise discretion in warfare: why autonomous weapons systems are unlawful. In: Bhuta N et al. (eds) Autonomous weapons systems: law, ethics, policy. Cambridge University Press. https://doi.org/10.2139/ssrn.2479808
Lior A (2020) AI entities as AI agents: artificial intelligence liability and the AI respondeat superior analogy. Mitchell Hamline Law Rev 46(5). https://open.mitchellhamline.edu/cgi/viewcontent.cgi?article=1223&context=mhlr
Loomis v. Wisconsin (2017) 881 N.W.2d 749 (Wis. 2016), cert. denied, 137 S.Ct. 2290
Malgieri G (2019) Automated decision-making in the EU Member States: the right to explanation and other “suitable safeguards” for algorithmic decisions in the EU national legislations. Comput Law Secur Rev. https://doi.org/10.2139/ssrn.3233611
Malgieri G, Pasquale F (2022) From transparency to justification: toward ex ante accountability for AI. Brussels Privacy Working Paper 33. https://doi.org/10.2139/ssrn.4099657
Ntoutsi E et al (2020) Bias in data-driven AI systems – an introductory survey. https://arxiv.org/pdf/2001.09762.pdf
Reis J et al (2019) Artificial intelligence in government services: a systematic literature review. In: Advances in Intelligent Systems and Computing, vol 930. https://doi.org/10.1007/978-3-030-16181-1_23
Reis J et al (2021) Influence of artificial intelligence on public employment and its impact on politics: a systematic literature review. Braz J Oper Prod Manag 18(3). https://doi.org/10.14488/BJOPM.2021.010
Slobogin C (2021) Preventative justice: how algorithms, parole boards and limiting retributivism could end mass incarceration. Wake Forest Law Rev 56(1):97–168
Stobbs N et al (2017) Can sentencing be enhanced by the use of artificial intelligence? Crim Law J 41:261–277
Sun T, Medaglia R (2019) Mapping the challenges of artificial intelligence in the public sector: evidence from public healthcare. Gov Inf Q 36(2):368–383. https://doi.org/10.1016/j.giq.2018.09.008
The White House (2009) Transparency and open government. https://obamawhitehouse.archives.gov/the-press-office/transparency-and-open-government. Accessed 2 Nov 2023
Wachter S et al (2016) Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. https://doi.org/10.2139/ssrn.2903469
Weber M (1994) Parliament and government in Germany under a new political order. In: Lassman P, Speirs R (eds) Weber: Political Writings. Cambridge Texts in the History of Political Thought. Cambridge University Press, Cambridge, p 160. https://doi.org/10.1017/CBO9780511841095
Zarsky T (2013) Transparent predictions. Univ Ill Law Rev 2013(4):1503–1569
Zhu L et al (2019) A study on predicting loan default based on the random forest algorithm. Procedia Comput Sci 162:503–513. https://doi.org/10.1016/j.procs.2019.12.017
Zuboff S (2015) Big other: surveillance capitalism and the prospects of an information civilization. J Inf Technol 30(1):75–89. https://doi.org/10.1057/jit.2015.5
Author information
Contributions
Co-authored.
Ethics declarations
Ethics Approval
Not applicable.
Consent to Participate
Not applicable.
Research Involving Human Participants and/or Animals
Not applicable.
Competing Interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Harel, A., Perl, G. Can AI-Based Decisions be Genuinely Public? On the Limits of Using AI-Algorithms in Public Institutions. Jus Cogens 6, 47–64 (2024). https://doi.org/10.1007/s42439-023-00088-7