Criminal courts’ artificial intelligence: the way it reinforces bias and discrimination

  • Commentary
  • Published in: AI and Ethics

Abstract

Pervasive and seemingly unstoppable global algorithmization is driving the deployment of artificial intelligence systems in criminal courts to replace obsolete bail and sentencing practices, reduce recidivism risk, and modernize judicial practice. Because artificial intelligence systems have proved to carry both golden promises and potential perils, applying them in the justice system also entails associated risks. Deploying this largely unchecked, novel resource in the judicial domain has therefore sparked vigorous debate over its legal and ethical implications. Against this background, this paper examines how and why artificial intelligence systems reinforce bias and discrimination in society and suggests what approach could serve as an alternative to the predictive justice mechanisms currently in use.

Notes

  1. For the purposes of this paper, artificial intelligence (AI) includes automated computer programs capable of assisting or replacing traditional judicial decision-making methods (e.g., algorithms used in predictive analytics).

  2. In AI jargon, the term ‘predictive’ refers to the possibility of predicting future results through inductive analysis that identifies correlations between input and output data (see Ref. [89]). The term ‘preventive justice’ was first used in the late eighteenth century and was aimed at preventing future crime ([72], p. 1753). Over time, preventive justice schemes have been reinvigorated with risk assessment algorithms that predict recidivism risk.

  3. COMPAS was created by the for-profit company Northpointe (which rebranded itself as ‘Equivant’ in January 2017). Its recidivism risk scale has been in use since 2000 (see Ref. [31]).

  4. As for predictive justice, there are reportedly now ‘more than 200 risk assessment tools available in criminal justice and forensic psychiatry, which are widely used to inform sentencing, parole decisions, and post-release monitoring’ (see Ref. [63]).

  5. 881 N.W.2d 749 (Wis. 2016), cert. denied, 137 S. Ct. 2290 (2017); this was a Wisconsin Supreme Court case that was appealed to the United States Supreme Court, which denied certiorari on June 26, 2017 (see Ref. [91]).

  6. Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Bias can emerge from many factors, e.g., the design of the algorithm or the way data are coded, collected, selected, or used to train the algorithm (see Ref. [59]); an illustrative sketch of this mechanism appears at the end of these notes.

  7. At Google only 21%, and at Facebook only 22%, of technical roles are filled by women. A related estimate was produced by tallying the numbers of men and women who had contributed work at three top machine learning conferences in 2017 (see Ref. [92]).

  8. ProPublica, a nonprofit organization that produces investigative journalism in the public interest, examined risk scores assigned to over 7,000 people in Broward County, Florida, checked how many were charged with new crimes over the next two years, and found that the COMPAS tool was ‘biased against blacks’. These findings provoked a public debate about the problems of automating government systems [57]; the sketch at the end of these notes illustrates this kind of error-rate comparison.

  9. However, a rejoinder to this investigative report was also published (see Ref. [37]).

  10. An interesting difference between knowledge and wisdom: knowledge is knowing that a tomato is a fruit; wisdom is not putting it in a fruit salad (Miles Kington’s witticism).

  11. A recent example is a bill introduced in the United States Senate in April 2019 [7], which represents one of the first major efforts to regulate AI [2]. The bill would require big companies to audit their machine-learning systems for bias and discrimination in an ‘impact assessment’ and to take corrective action in a timely manner if such issues were identified. Notably, the US is not alone in this endeavor; the UK, France, Australia, and other jurisdictions have all recently drafted or passed legislation to hold tech companies accountable for their algorithms (see Ref. [53]).

  12. The European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe has adopted the European Ethical Charter on the use of artificial intelligence (AI) in judicial systems, the first European text to set out ethical principles on the use of AI in judicial systems (see Ref. [35]).

  13. The guidelines set forth 12 principles intended to guide the design, development, and deployment of AI, together with frameworks for policy and legislation.

  14. AI can also address other specific challenges that criminal courts face, such as processing and managing digital data, information sharing, case management, evidence management, cybersecurity, resource allocation, and language translation (see Ref. [83]).
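
To make notes 6 and 8 concrete, the following minimal sketch (not taken from the paper; the data are synthetic and every variable name and threshold is hypothetical) trains a toy risk model on ‘historical’ data in which one group is policed more heavily, and then audits it ProPublica-style by comparing false-positive rates between the two groups.

    # Minimal, purely illustrative sketch (synthetic data; hypothetical names
    # and numbers; not the paper's method and not COMPAS itself).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                       # two demographic groups, 0 and 1
    propensity = rng.uniform(0, 1, n)                   # latent propensity, same in both groups
    reoffend = rng.binomial(1, 0.2 + 0.4 * propensity)  # true outcome does not depend on group

    # Historical proxy: arrest counts reflect propensity AND heavier policing of group 1.
    prior_arrests = rng.poisson(1 + 2 * propensity + 1.5 * group)

    # The model only sees the proxy, so it absorbs the policing skew.
    X = prior_arrests.reshape(-1, 1)
    model = LogisticRegression().fit(X, reoffend)
    flagged_high_risk = model.predict_proba(X)[:, 1] > 0.45

    # ProPublica-style audit: the false-positive rate is the share of people who
    # did NOT reoffend but were still flagged as high risk, computed per group.
    for g in (0, 1):
        did_not_reoffend = (group == g) & (reoffend == 0)
        print(f"group {g}: false-positive rate = {flagged_high_risk[did_not_reoffend].mean():.2f}")

    # Although true reoffending is distributed identically across the groups, the more
    # heavily policed group collects systematically more false 'high risk' labels.

The point is design-level rather than empirical: the outcome variable is identical across groups, yet the proxy feature imports the policing skew, and the error-rate comparison then exposes it as a false-positive gap of the kind ProPublica reported.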

References

  1. Collosa, A.: Algorithms, biases, and discrimination in their use: about recent judicial rulings on the subject. https://www.ciat.org/ciatblog-algorithms-biases-and-discrimination-in-their-use-about-recent-judicial-rulings-on-the-subject/?lang=en (2021)

  2. Algorithmic Accountability Act Targets AI Bias. Jones Day. https://www.jonesday.com/en/insights/2019/06/proposed-algorithmic-accountability-act; Algorithmic Accountability Act of 2019. https://www.wyden.senate.gov/imo/media/doc/Algorithmic%20Accountability%20Act%20of%202019%20Bill%20Text.pdf?utm_campaign=the_algorithm.unpaid.engagement&utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-QLmnG4HQ1A-IfP95UcTpIXuMGTCsRP6yF2OjyXHH-66cuuwpXO5teWKx1dOdk-xB0b9 (2019). Accessed 23 Jan 2020

  3. Allsop, C.J.: Technology and the future of the courts. The Federal Court of Australia. https://www.fedcourt.gov.au/digital-law-library/judges-speeches/chief-justice-allsop/allsop-cj-20190326. Accessed 26 Mar 2019

  4. Murray, A.: Almost human: law and human agency in the time of artificial intelligence. In: Sixth annual T.M.C. Asser Lecture, TMC Asser Press (2021)

  5. Benjamin, R.: Race after technology: abolitionist tools for the new Jim code. Polity Press (2019). [Especially for bias and default discrimination]

  6. Barabas, C., Dinakar, K., Ito, J., Virza, M., Zittrain, J.: Interventions over predictions: reframing the ethical debate for actuarial risk assessment. arXiv:1712.08238 [cs, stat]. https://arxiv.org/abs/1712.08238 (2019). Accessed 5 Dec 2020

  7. Bernard, Z.: The first bill to examine “algorithmic bias” in government agencies has just passed in New York City. Business Insider. http://www.businessinsider.com/algorithmic-bias-accountability-bill-passes-in-new-york-city-2017-12?IR=T (2017). Accessed 5 Dec 2020

  8. Big Data: A report on algorithmic systems, opportunity, and civil rights. Executive Office of the President. https://permanent.fdlp.gov/gpo90618/2016_0504_data_discrimination.pdf (2016). Accessed 16 Sep 2020

  9. Borgesius, F.Z.: Discrimination, artificial intelligence, and algorithmic decision-making. Council of Europe. https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73 (2018)

  10. Brauneis, R., Goodman, E.P.: Algorithmic transparency for the smart city. 20 Yale J. Law Technol. pp.103, 114. https://yjolt.org/sites/default/files/20_yale_j._l._tech._103.pdf (2018). Accessed 13 Dec 2019

  11. Buolamwini, J., Gebru, T., Friedler, S., Wilson, C.: Gender shades: intersectional accuracy disparities in commercial gender classification. Proc. Mach. Learn. Res. 81, 1–15. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf (2018)

  12. Cath, C.: Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 376(2133), 20180080 (2018). https://doi.org/10.1098/rsta.2018.0080

  13. Caliskan, A., Bryson, J.J., Narayanan, A.: Semantics derived automatically from language corpora contain human-like biases. Science 356(6334), 183–186. https://science.sciencemag.org/content/356/6334/183 (2017). Accessed 4 Jul 2020

  14. Perez, C.C.: Invisible women: exposing data bias in a world designed for men. Abrams Press, New York (2019)

  15. Carlson, A.: The need for transparency in the age of predictive sentencing algorithms. Iowa Law Rev. https://ilr.law.uiowa.edu/assets/Uploads/ILR-103-1-Carlson.pdf (2017)

  16. Casey, P.M.: Using offender risk and needs assessment information at sentencing: guidance for courts from a national working group. NCSC. https://www.ncsc.org/data/assets/pdf_file/0016/26251/final-pew-report-updated-10-5-15.pdf (2011)

  17. Chander, A.: The racist algorithm? Michigan Law Rev. 115(6), 1023. http://michiganlawreview.org/wp-content/uploads/2017/04/115MichLRev1023_Chander.pdf (2017)

  18. Chou, Oscar, & Roger: What the kids’ game ‘Telephone’ taught Microsoft about biased AI. Fast Company. https://www.fastcompany.com/90146078/what-the-kids-game-telephone-taught-microsoft-about-biased-ai#:~:text=AI%20chatbots%20are%20susceptible%20to (2017). Accessed 16 Sept 2020

  19. Chan, J.: In a local first, Sabah court gives out sentence assisted by AI, Malay Mail. https://www.malaymail.com/news/malaysia/2020/02/19/in-a-local-first-sabah-court-gives-out-sentence-assisted-by-ai/1838906 (2020). Accessed 16 Sep 2020

  20. Christin, A., Rosenblat, A., Boyd, D.: Courts and predictive algorithms. Data & civil rights: a new era of policing and justice. https://www.law.nyu.edu/sites/default/files/upload_documents/Angele%20Christin.pdf (2015). Accessed 18 Sep 2020

  21. Citron, D.: (Un)Fairness of risk scores in criminal sentencing. Forbes. https://www.forbes.com/sites/daniellecitron/2016/07/13/unfairness-of-risk-scores-in-criminal-sentencing/ (2016). Accessed 5 Dec 2020

  22. Miller, C.C.: Hidden bias: when algorithms discriminate. The New York Times. https://www.nytimes.com/2015/07/10/upshot/when-algorithms-discriminate.html (2015)

  23. Coalition for Critical Technology. Abolish the #TechToPrisonPipeline. Medium. https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16 (2020). Accessed 22 Jan 2021

  24. Corbett Davies, S., et al.: A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. The Washington Post. https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/ (2016). Accessed 25 Apr 2020

  25. Corbett Davies, S., et al.: Algorithmic decision making and the cost of fairness. In: Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining. https://5harad.com/papers/fairness.pdf (2017). Accessed 18 Sep 2020

  26. Crawford, K.: The hidden biases in big data. Harv. Bus. Rev. https://hbr.org/2013/04/the-hidden-biases-in-big-data (2018). Accessed 23 Jul 2020

  27. Danziger, S., Levav, J., Avnaim-Pesso, L.: Extraneous factors in judicial decisions. Proc. Natl. Acad. Sci. 108(17), 6889–6892. https://www.pnas.org/content/108/17/6889 (2011). Accessed 2 Feb 2020

  28. Dieterich, et al.: COMPAS risk scales: demonstrating accuracy equity and predictive parity. Technical Report, Northpointe Inc. https://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf (2016). Accessed 18 Sept 2020

  29. Dignum, V.: On bias, black-boxes and the quest for transparency in AI. Delft Design for Values Institute. https://www.delftdesignforvalues.nl/2018/on-bias-black-boxes-and-the-quest-for-transparency-in-artificial-intelligence/ (2018). Accessed 22 Apr 2020

  30. Dignum, V.: Responsible artificial intelligence: how to develop and use AI in a responsible way, p. 59. Springer (2019)

  31. Dressel, J., Farid, H.: The accuracy, fairness, and limits of predicting recidivism. Sci. Adv. 4(1), eaao5580. https://advances.sciencemag.org/content/advances/4/1/eaao5580.full.pdf (2018). Accessed 22 Jan 2021

  32. Dzindolet, M.T., et al.: The role of trust in automation reliance. Int. J. Hum. Comput. Stud. 58(6), 697–718 (2003)

  33. Eckhouse, L.: Opinion | Big data may be reinforcing racial bias in the criminal justice system. Washington Post. https://www.washingtonpost.com/opinions/big-data-may-be-reinforcing-racial-bias-in-the-criminal-justice-system/2017/02/10/d63de518-ee3a-11e6-9973-c5efb7ccfb0d_story.html (2017)

  34. Electronic Privacy Information Center: EPIC - algorithms in the criminal justice system: pre-trial risk assessment tools. Epic.org. https://epic.org/algorithmic-transparency/crim-justice/ (2014)

  35. European Commission for the Efficiency of Justice (CEPEJ): CEPEJ European Ethical Charter on the use of artificial intelligence (AI) in judicial systems and their environment. European Commission for the Efficiency of Justice (CEPEJ). https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment (2018). Accessed 22 Jan 2021

  36. FRA: In brief - big data, algorithms and discrimination. European Union Agency for Fundamental Rights. https://fra.europa.eu/en/publication/2018/brief-big-data-algorithms-and-discrimination (2018)

  37. Flores, A.: False positives, false negatives, and false analyses: a rejoinder to “machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks.” http://www.crj.org/assets/2017/07/9_Machine_bias_rejoinder.pdf (2017). Accessed 22 Jan 2021

  38. Friedman, B., Nissenbaum, H.: Bias in computer systems. ACM Trans. Inf. Syst. TOIS 14(3), 330–347 (1996)

  39. Global Legal Monitor: Netherlands: court prohibits government’s use of AI software to detect welfare fraud. https://www.loc.gov/law/foreign-news/article/netherlands-court-prohibits-governments-use-of-ai-software-to-detect-welfare-fraud/ (2020). Accessed 22 Jan 2021

  40. Goodman, B., Flaxman, S.: European Union regulations on algorithmic decision-making and a ‘right to explanation’. AI Mag. 38(3), 50–57 (2017). https://doi.org/10.1609/aimag.v38i3.2741

  41. Greengard, S.: Algorithms in the courtroom. Commun. ACM. https://cacm.acm.org/news/244263-algorithms-in-the-courtroom/fulltext (2020). Accessed 19 Sep 2020

  42. Greenleaf, G.: Global tables of data privacy laws and bills (5th Ed 2017). Papers.ssrn.com. https://ssrn.com/abstract=2992986 (2017)

  43. Hao, K., Stray, J.: Can you make AI fairer than a judge? Play our courtroom algorithm game. MIT Technol. Rev. https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/ (2019). Accessed 9 Sep 2020

  44. Harcourt, B.E.: Against prediction: sentencing, policing, and punishing in an actuarial age, p. 6. University of Chicago Press (2005)

  45. Heilweil, R.: Why algorithms can be racist and sexist. Vox. https://www.vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency (2020). Accessed 22 Jan 2021

  46. Heaven, W.: Predictive policing algorithms are racist. They need to be dismantled. MIT Technol. Rev. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/ (2020). Accessed 20 Mar 2020

  47. Hodgson, C.: AI tools in US criminal justice branded unreliable by researchers. Financial Times. https://www.ft.com/content/7b6c424c-676e-11e9-a79d-04f350474d62 (2019). Accessed 30 Mar 2021

  48. Israni, E., Chang, E. (eds.): Algorithmic due process: mistaken accountability and attribution in State v. Loomis. Harv. J. Law Technol. https://jolt.law.harvard.edu/digest/algorithmic-due-process-mistaken-accountability-and-attribution-in-state-v-loomis-1 (2017). Accessed 23 Mar 2020

  49. Israni, E.T.: Opinion | when an algorithm helps send you to prison. The New York Times. https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html (2017). Accessed 23 Jul 2020

  50. Johnson, R.C.: Overcoming AI bias with AI fairness. Commun. ACM. https://cacm.acm.org/news/233224-overcoming-ai-bias-with-ai-fairness/fulltext (2018). Accessed 16 Sep 2020

  51. Angwin, J., Larson, J.: ProPublica responds to company’s critique of machine bias story. ProPublica. https://www.propublica.org/article/propublica-responds-to-companys-critique-of-machine-bias-story (2016). Accessed 15 Dec 2019

  52. Kann, D.: What the criminal justice system costs you. CNN. https://edition.cnn.com/2018/06/28/us/mass-incarceration-five-key-facts/index.html (2018)

  53. Hao, K.: Congress wants to protect you from biased algorithms, deepfakes, and other bad AI. MIT Technol. Rev. https://www.technologyreview.com/2019/04/15/1136/congress-wants-to-protect-you-from-biased-algorithms-deepfakes-and-other-bad-ai/ (2019). Accessed 19 Sep 2020

  54. Hao, K., Stray, J.: Can you make AI fairer than a judge? Play our courtroom algorithm game. MIT Technol. Rev. https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/ (2019). Accessed 22 Jan 2021

  55. Freeman, K.: Algorithmic injustice: how the Wisconsin Supreme Court failed to protect due process rights in State v. Loomis. N. C. J. Law Technol. 18, 75–99 (2016)

  56. Heilbrun, K.: Risk assessment in evidence-based sentencing: context and promising uses. Chap. J. Crim. Just. 1, 127 (2009)

  57. Kirkpatrick, K.: Battling algorithmic bias: how do we ensure algorithms treat us fairly? Commun. ACM 59(10), 16–17 (2016)

  58. Kirkpatrick, K.: It’s not the algorithm, it’s the data. Commun. ACM 60(2), 21–23 (2017)

  59. Kitchin, R.: Thinking critically about and researching algorithms. Inf. Commun. Soc. 20(1), 14–29 (2016)

  60. Kleinberg, J., Ludwig, J., Mullainathan, S., Sunstein, C.R.: Discrimination in the age of algorithms. J. Leg. Anal. 10, 144 (2018)

  61. Koepke, L.: A reality check: algorithms in the courtroom. Medium. https://medium.com/equal-future/a-reality-check-algorithms-in-the-courtroom-7c972da182c5 (2017). Accessed 17 Sep 2020

  62. Koepke, L.: Pre-trial algorithms deserve a fresh look, study suggests. Medium. https://medium.com/equal-future/pre-trial-algorithms-deserve-a-fresh-look-study-suggests-712e97558a70 (2019). Accessed 17 Sep 2020

  63. Ligeti, K.: AIDP-IAPL international congress of penal law: artificial intelligence and criminal justice. http://www.penal.org/sites/default/files/Concept%20Paper_AI%20and%20Criminal%20Justice_Ligeti.pdf (2019). Accessed 2 Oct 2020

  64. Liu, et al.: Beyond State v Loomis: artificial intelligence, government algorithmization and accountability. Int. J. Law Inf. Technol. 27(2), 122–141 (2019)

  65. Lloyd, Hamilton: Bias amplification in artificial intelligence systems. arXiv:1809.07842. https://arxiv.org/ftp/arxiv/papers/1809/1809.07842.pdf (2018)

  66. Loomis v. Wisconsin, No. 16-6387 (U.S.) (2016). https://www.scotusblog.com/wp-content/uploads/2017/05/16-6387-CVSG-Loomis-AC-Pet.pdf. Accessed 22 Jan 2022

  67. Malek, M.A.: Quantification in criminal courts. Medium. https://medium.com/ab-malek/quantification-in-criminal-courts-d9162f75004b (2021). Accessed 22 Jan 2021

  68. Malek, M.A.: Quantification in criminal courts. Medium. https://towardsdatascience.com/quantification-in-criminal-courts-d9162f75004b (2021) Accessed 22 Jan 2021

  69. McSherry, B.: Risk assessment, predictive algorithms and preventive justice. In: Pratt, J., Anderson, J. (eds.) Criminal justice, risk and the revolt against uncertainty. Palgrave studies in risk, crime and society. Palgrave Macmillan, Cham (2020). https://doi.org/10.1007/978-3-030-37948-3_2

  70. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. 54(6), 1–35 (2021). https://doi.org/10.1145/3457607

  71. Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S., Floridi, L.: The ethics of algorithms: mapping the debate. Big Data Soc. 3(2), 205395171667967 (2016). https://doi.org/10.1177/2053951716679679

  72. Morrison, W. (ed.): Blackstone’s commentaries on the Laws of England, Volumes I–IV, p. 1753. Routledge (2001)

  73. Obermeyer, Z., Powers, B., Vogeli, C., Mullainathan, S.: Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464), 447–453 (2019). https://doi.org/10.1126/science.aax2342

  74. O’Reilly-Shah, V.N., et al.: Bias and ethical considerations in machine learning and the automation of perioperative risk assessment. Br. J. Anaesth. 125(6), 843–846 (2020). https://doi.org/10.1016/j.bja.2020.07.040

  75. Osborne, J.W.: Best practices in quantitative methods. Sage Publications (2008). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.873.7379&rep=rep1&type=pdf. Accessed 18 Sept 2020

  76. Pasquale, F.: Secret algorithms threaten the rule of law. MIT Technol. Rev. https://www.technologyreview.com/2017/06/01/151447/secret-algorithms-threaten-the-rule-of-law/ (2017). Accessed 18 Sep 2020

  77. Paris Innovation Review: Predictive justice: when algorithms pervade the law. Paris Innovation Review. http://parisinnovationreview.com/articles-en/predictive-justice-when-algorithms-pervade-the-law (2017)

  78. Perry, W.L., et al.: Predictive policing: the role of crime forecasting in law enforcement operations. Rand.org. https://www.rand.org/pubs/research_reports/RR233.html (2013)

  79. Piana, D.: Algorithms in the courthouse. MIT Technol. Rev. Insights. https://insights.techreview.com/predicting-justice-what-if-algorithms-entered-the-courthouse/ (2019). Accessed 18 Sep 2020

  80. Powles, J.: The seductive diversion of ‘solving’ bias in artificial intelligence. Medium. https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53 (2018). Accessed 15 Sep 2020

  81. ProPublica: Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (2016)

  82. Rahnama, K.: Science and ethics of algorithms in the courtroom. J. Law Technol. Policy. http://illinoisjltp.com/journal/wp-content/uploads/2019/05/Rahnama.pdf (2017). Accessed 19 Sep 2020

  83. Redden, J., Banks, D., Criminal Justice Testing and Evaluation Consortium: Artificial intelligence applications for criminal courts. U.S. Department of Justice, National Institute of Justice, Office of Justice Programs. https://cjtec.org/files/5f5f943055f95 (2020). Accessed 1 Jan 2022

  84. Reuters: Amazon ditched AI recruiting tool that favored men for technical jobs. The Guardian. https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine (2018)

  85. Re, R.M., Solow-Niederman, A.: Developing artificially intelligent justice. Stanf. Technol. Law Rev. 22, 242–289 (2019)

  86. Rosenberg, J.: Only humans, not computers, can learn or predict. TechCrunch. https://techcrunch.com/2016/05/05/only-humans-not-computers-can-learn-or-predict/ (2016). Accessed 5 Dec 2020

  87. Roth, A.: Machine testimony. Yale Law J. 126, 1972–2053 (2017)

  88. Rouvroy, A., Berns, T.: Algorithmic governmentality and prospects of emancipation: disparateness as a precondition for individuation through relationships? Réseaux 177(1), 163–196. https://www.cairn-int.info/article-E_RES_177_0163--algorithmic-governmentality-and-%20prospect.htm# (2013)

  89. Sadhu Singh, D.J.K.: Ethical questions, risks of using AI in “predictive justice.” New Straits Times. https://www.nst.com.my/opinion/columnists/2020/02/565890/ethical-questions-risks-using-ai-predictive-justice (2020). Accessed 2 Oct 2020

  90. Schimel, B., Tseytlin, M.: Brief in opposition in Loomis. p. 13. https://www.scotusblog.com/wpcontent/uploads/2017/02/16-6387-BIO.pdf (2017)

  91. SCOTUSblog: Loomis v. Wisconsin, No. 16-6387 (U.S. Oct. 5, 2016). https://www.scotusblog.com/case-files/cases/loomis-v-wisconsin/ (2017). Accessed 18 Sep 2020

  92. Simonite, T.: Algorithms should’ve made courts more fair. What went wrong? Wired. https://www.wired.com/story/algorithms-shouldve-made-courts-more-fair-what-went-wrong/ (2019). Accessed 19 Sep 2020

  93. Smith, R.A.: Opening the lid on criminal sentencing software. Duke.edu. https://today.duke.edu/2017/07/opening-lid-criminal-sentencing-software (2017). Accessed 20 Nov 2020

  94. Starr, S.B.: Evidence-based sentencing and the scientific rationalization of discrimination. Stanf. Law Rev. 66(4), 815–816 (2014)

  95. State v. Loomis: Wisconsin Supreme Court requires warning before use of algorithmic risk assessments in sentencing. Harv. Law Rev. 130(5), 1530–1537. https://harvardlawreview.org/2017/03/state-v-loomis/#:~:text=Wisconsin%20Supreme%20Court%20Requires%20Warning (2017). Accessed 16 Sep 2020

  96. Tashea, J.: Courts are using AI to sentence criminals. That must stop now. Wired. https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/ (2017). Accessed 18 Sep 2020

  97. The Public Voice: Universal guidelines for artificial intelligence, Brussels, Belgium. https://thepublicvoice.org/ai-universal-guidelines/ (2018). Accessed 19 Sep 2020

  98. UNODC - United Nations Office on Drugs and Crime: The use of artificial intelligence in the administration of justice. YouTube. https://www.youtube.com/watch?v=ozfY8tqVjLs&list=LLxz7K6l-JPlRzN_gU8Ew2ZA&index=1&t=2750s (2020). Accessed 6 Oct 2020

  99. ACM: Public Policy Council releases statement and principles on algorithmic bias. Association for Computing Machinery. https://www.acm.org/articles/bulletins/2017/january/usacm-statement-algorithmic-accountability (2017). Accessed 20 Nov 2020

  100. Wachter, S., Mittelstadt, B., Russell, C.: Bias preservation in machine learning: the legality of fairness metrics under EU non-discrimination law. SSRN Electron. J. (2021). https://doi.org/10.2139/ssrn.3792772

  101. Schiek, et al.: Cases, materials and text on national, supranational and international non-discrimination law. Hart Publishing (2007)

  102. Wexler, R.: Opinion | When a computer program keeps you in jail. The New York Times. https://www.nytimes.com/2017/06/13/opinion/how-computers-are-harming-criminal-justice.html (2017). Accessed 18 Sept 2020

  103. Williams, J.: EFF urges California to place meaningful restrictions on the use of pretrial risk assessment tools. Electronic Frontier Foundation. https://www.eff.org/deeplinks/2018/12/eff-urges-california-place-meaningful-restrictions-use-pretrial-risk-assessment (2018). Accessed 19 Sep 2020

  104. Wisser, L.: Pandora’s algorithmic black box: the challenges of using algorithmic risk assessments in sentencing. Am. Crim. Law Rev. 56(4), 1811–1832. https://www.law.georgetown.edu/american-criminal-law-review/in-print/volume-56-number-4-fall-2019/pandoras-algorithmic-black-box-the-challenges-of-using-algorithmic-risk-assessments-in-sentencing/ (2019). Accessed 5 Dec 2020

  105. World Economic Forum: How to prevent discriminatory outcomes in machine learning (white paper). https://www.weforum.org/whitepapers/how-to-prevent-discriminatory-outcomes-in-machine-learning (2018)

  106. Wolfers, J., Leonhardt, D., Quealy, K.: 1.5 Million missing black men (published 2015). The New York Times. http://www.nytimes.com/interactive/2015/04/20/upshot/missing-black-men.html (2015). Accessed 5 Dec 2020

  107. Yong, E.: A popular algorithm is no better at predicting crimes than random people. The Atlantic. https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/ (2018)

  108. Završnik, A.: Algorithmic justice: algorithms and big data in criminal justice settings. Euro J Criminol. 18, 623–642 (2019). https://doi.org/10.1177/1477370819876762

  109. Završnik, A.: Criminal justice, artificial intelligence systems, and human rights. ERA Forum 20, 567–583 (2020). https://doi.org/10.1007/s12027-020-00602-0. (Accessed 24 Mar 2020)

  110. Zeng, Y., Lu, E., Huangfu, C.: Linking artificial intelligence principles: different school of thoughts. In: AAAI workshop on artificial intelligence safety. https://arxiv.org/ftp/arxiv/papers/1812/1812.04814.pdf (2019)

  111. Žliobaitė, I.: Measuring discrimination in algorithmic decision making. Data Min. Knowl. Discov. 31(4), 1060–1089 (2017). https://doi.org/10.1007/s10618-017-0506-1

Author information

Correspondence to Md. Abdul Malek.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Malek, M.A. Criminal courts’ artificial intelligence: the way it reinforces bias and discrimination. AI Ethics 2, 233–245 (2022). https://doi.org/10.1007/s43681-022-00137-9
