AI Assistance in the Courtroom and Immediacy

Abstract

This chapter examines whether the use of AI assistants in the courtroom, during the hearing of testimony, may be sufficient to satisfy the criteria set out by the ECtHR.

AI assistants may be used as a tool in the appeal phase, since the judge will have access to an algorithmic analysis of the relevant and decisive evidence, in order to select which testimony is ‘relevant’ and ‘determinant’ to prove certain facts and whether it is credible, and thus to avoid repeating in the appeal phase every hearing indicated in the appeal.

The use of AI assistants may promote a fairer, more efficient and more objective justice system, since AI could avoid human prejudice.

Nevertheless, to avoid the ‘black box’ problem, a human judge must always be involved in this procedure and the output of the AI system must be comprehensible. Problems arise when one resorts to algorithms that are not intelligible to judges, or even to experts, and that lack stable software code. In that case, the use of such an AI system will jeopardize the principle of immediacy and the right to a fair trial.

Notes

1. CEPEJ (2019), p. 69.

2. Ulenaers (2020), p. 5.

3. Contini (2020), p. 5.

4. Seng and Mason (2021), pp. 243, 258 ff. About machine learning evidence, Nutter (2019), passim.

5. Council of Europe (2018), p. 3.

6. Sourdin (2018), p. 1130 and Sourdin and Cornes (2018), p. 94 (referring to it as ‘co-bots’). With the same opinion regarding the attorney, Surden (2014), p. 101.

7. Ulenaers (2020), p. 11.

8. Lim (2021), pp. 285–286.

9. Gillespie (2014), p. 167.

10. Nutter (2019), pp. 935–936.

11. Council of Europe (2018), pp. 5–7.

12. Contini (2020), p. 9.

13. Bampasika (2020). ‘[m]achine learning involves computer algorithms that have the ability to “learn” or improve in performance over time on some task’, Surden (2014), p. 88.
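Purely as an illustration of the quoted definition, and not drawn from the chapter or from Surden's text, the following minimal Python sketch shows what ‘improving in performance over time on some task’ amounts to in the simplest case: a classifier whose error on invented, labelled examples decreases as it iterates over them. Every name and number in it is hypothetical.

```python
# Illustrative sketch only: a tiny learning algorithm whose performance
# on a task (separating two classes of points) improves with training.
# The data and parameters are made up for demonstration purposes.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: 200 two-dimensional points, two classes.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w, b = np.zeros(2), 0.0   # model parameters, initially uninformed
lr = 0.1                  # learning rate

def predict_proba(X):
    """Probability of class 1 under the current parameters."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for epoch in range(201):
    p = predict_proba(X)
    # Cross-entropy loss: lower means better performance on the task.
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    if epoch % 50 == 0:
        accuracy = np.mean((p > 0.5) == y)
        print(f"epoch {epoch:3d}  loss {loss:.3f}  accuracy {accuracy:.2f}")
    # Gradient step: the 'learning' that improves performance over time.
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b
```

Nothing in this toy example involves rules drafted by a lawyer; the ‘learning’ is only the repeated adjustment of numerical parameters against examples.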

14. About these different tasks, Nutter (2019), p. 929.

15. Nutter (2019), p. 930.

16. Nutter (2019), pp. 933–935.

17. Contini (2020), pp. 11–12. In the double-screen system the judge has one screen and the parties another, so they can monitor in real time the hearing report produced by the programme and ask for amendments, Contini (2020), pp. 11–12. As to the problem of due process under the Fifth Amendment to the American Constitution, there is a proposal to treat machine learning output as a form of expert testimony, so that the defendant has the opportunity to cross-examine an expert on the machine’s capabilities and processes, Nutter (2019), pp. 945–948.

18. Ash (2018), pp. 1–2.

19. Mendes (2020), p. 60.

20. Sourdin (2018), p. 1131.

21. Sourdin and Cornes (2018), pp. 91–92.

22. Contini (2020), p. 6.

23. Contini (2020), p. 6.

24. Council of Europe (2018), pp. 8–9.

25. Mendes (2010), p. 1000.

26. Mendes (2010), p. 1011.

27. Taruffo (2005), pp. 424–425.

28. Dias (2004), pp. 202–205. This idea was also visible in the thought of M C Ferreira, according to whom the judge, instead of being bound to pre-fixed and abstract norms on the assessment of evidence, must be subordinate to the principles of evidentiary law and to norms of experience, logic, and incontestable rules of a scientific nature, Ferreira (1986), pp. 211–212. In this regard, Melim (2013), pp. 152–153.

29. Neves (1968), pp. 50–51. Also rejecting the understanding of the system of free proof as a kind of arbitrariness, based on the ideas that free proof is scientific proof and that the judge’s intime conviction was reinforced by a new requirement to give reasoned decisions, Mendes (2010), pp. 1000–1001. Also, Matta (2004), pp. 254–256.

30. Allen (2001), p. 103. ‘Every decision maker will have an idiosyncratic belief set precisely because no two humans have lived the same lives’, Allen (2001), p. 103. ‘[t]he trial judge may exclude evidence that bears no logical relationship to the cause of action and thus the only effects of which can be to waste resources and generate erroneous conclusions. Under modern law these decisions are truly given to the discretion of the trial judge. Some precedents do arise that constrain that discretion somewhat, but not much. Thus, this form of regulation of the proof process depends crucially on the judgment of a human actor’, Allen (2001), p. 107.

31. Sourdin and Cornes (2018), p. 88.

32. Sourdin (2018), pp. 1130–1131; Sourdin and Cornes (2018), pp. 95, 100, 105, 111; Ulenaers (2020), pp. 11, 18. ‘[t]he default availability of a responsive human judge permitted to review all aspects of the AI input, and able to call on a complex array of communication and social skills, remains desirable to support understanding and compliance with the law’, Sourdin and Cornes (2018), p. 100.

33. Lim (2021), p. 305. ‘To achieve the same effect with AI, one would either have to use different training sets of data to achieve different predictive formulae, which would be the equivalent to knowingly citing only some relevant case law but not all of it, or to add a random factor into the equation, which would be anathema to the concept of fair and transparent judicial decision-making’, Lim (2021), p. 305.

34. Lim (2021), pp. 294–295, 302. ‘[t]he experience of the English courts shows that it is one thing to accept probabilistic reasoning in the evaluation of one aspect of the evidence, and another to use probabilistic reasoning in an overall assessment of the evidence’, Lim (2021), p. 296.

35. Matta (2004), p. 265.

36. Lim (2021), pp. 302–303.

37. Lim (2021), p. 303.

38. Sourdin (2018), p. 1125; Surden (2014), p. 105.

39. Lim (2021), p. 303.

40. Searle (1984), p. 28 ff. Recovering this distinction, Sourdin and Cornes (2018), p. 102.

41. Proposing ‘affective computing’, that is, ‘computing that relates to, arises from, or influences emotions’, and suggesting models for affect recognition, Picard (1995), pp. 1–2, 7–8, 14. ‘The input would be a set of observations, the output a set of probabilities for each possible emotional state’, Picard (1995), p. 7. About Hidden Markov Models for speech emotion recognition, among others, Mao et al. (2019), p. 6715 ff.
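Again purely for illustration, and not taken from Picard (1995) or Mao et al. (2019), the sketch below follows the quoted description: the input is a set of observations (a sequence of acoustic feature vectors), the output a set of probabilities for each possible emotional state. It assumes the third-party hmmlearn library, an invented three-emotion label set, and randomly generated stand-in features; one hidden Markov model is trained per emotion and the per-model log-likelihoods are normalised into probabilities.

```python
# Illustrative sketch only: classifying an utterance's emotional state with
# one hidden Markov model per emotion, as in HMM-based speech emotion
# recognition. The feature data is random stand-in data, not real speech.
# Assumes the hmmlearn package (pip install hmmlearn).
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "angry", "happy"]   # hypothetical label set

def fake_utterance(shift, n_frames=80, n_feats=13):
    """Stand-in for a sequence of acoustic feature vectors (e.g. MFCC-like)."""
    return rng.normal(shift, 1.0, (n_frames, n_feats))

# 1. Train one HMM per emotional state on (made-up) labelled utterances.
models = {}
for shift, emotion in zip([-1.0, 0.0, 1.0], EMOTIONS):
    train = np.vstack([fake_utterance(shift) for _ in range(5)])
    lengths = [80] * 5   # frames per training utterance
    model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
    model.fit(train, lengths)
    models[emotion] = model

# 2. Score a new utterance against every model: the input is a set of
#    observations, the output a probability for each emotional state
#    (equal priors assumed, softmax over the per-model log-likelihoods).
utterance = fake_utterance(0.9)
log_likelihoods = np.array([models[e].score(utterance) for e in EMOTIONS])
probs = np.exp(log_likelihoods - log_likelihoods.max())
probs /= probs.sum()

for emotion, p in zip(EMOTIONS, probs):
    print(f"P({emotion}) = {p:.3f}")
```

Training one model per class and comparing their likelihoods is a standard way of using HMMs as classifiers; the choice of features, labels and priors here is entirely hypothetical.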

42. Sourdin and Cornes (2018), p. 112. Adding: ‘For any Judge AI project the problem thus arises: how to code to allow for the influence of a similar range of varying personal, human, and societal inputs in addition to reflecting legal rules and principles? The problem is especially difficult because such personal inputs, emanating from human judges’, and society’s unconscious, are by definition not consciously knowable and therefore not translatable into code. (…) While, therefore, many aspects of the judicial task may ultimately be captured in code, the human heart of the judicial process, being a combination of conscious and currently unknowable unconscious thought, remains quite literally beyond the comprehension of the most talented programmer’, Sourdin and Cornes (2018), p. 112.

43. Spaulding (2020), pp. 396–401.

44. Ulenaers (2020), pp. 11–12, 15, 18.

45. Lepri et al. (2018), pp. 611, 622; Bampasika (2020). Also, about unconscious judge bias, Sourdin and Cornes (2018), pp. 95–96.

46. Lepri et al. (2018), p. 622.

47. In Re JP Linaham (1943) 138 F.2d 650 (2d Cir), Justia, 652, https://law.justia.com/cases/federal/appellate-courts/F2/138/650/1481751/.

48. Becker (1966), p. 8 ff.

49. About the role of the unconscious in legal reasoning, the ‘legal self’, and ‘a psychoanalytical understanding of the judicial mind’, Sourdin and Cornes (2018), pp. 104, 110–111.

50. Sourdin and Cornes (2018), pp. 95–96.

51. Ash (2018), pp. 3, 6.

52. Lepri et al. (2018), pp. 612–614, 622, 624. About biases of algorithms, since they are trained on biased data, and also about their complexity and obscurity, Ash (2018), p. 4. This last author proposes a solution: to make the code open source, but to keep the evidence weights used by the algorithm private. The public would then be able to verify whether the parameters were learned fairly, without knowing the particular action or evidence used by the algorithm, Ash (2018), p. 5.

53. Council of Europe (2018), p. 10. Bampasika (2020). About this problem of the ‘black box’, Spaulding (2020), p. 389 and Ulenaers (2020), pp. 11–12, 16.

54. ‘[t]here is a danger that support systems based on artificial intelligence are inappropriately used by judges to “delegate” decisions to technological systems that were not developed for that purpose and are perceived as being more “objective” even when this is not the case. Great care should therefore be taken to assess what such systems can deliver and under what conditions that may be used in order not to jeopardise the right to a fair trial’, Council of Europe (2018), p. 12.

55. Angwin et al. (2016).

56. Lepri et al. (2018), pp. 615, 622 and Lim (2021), pp. 289–290. But of course, this may also be utopian: ‘There is the possibility that our own human intelligence is also incompletely explainable, and it would therefore be futile to expect humans to be able to build a completely explicable artificial intelligence’, Lim (2021), p. 288, n. 41.

57. CEPEJ (2019), pp. 60–61.

58. Picard (1995), p. 9.

59. Lim (2021), p. 289.

60. Greco (2020), pp. 34–35.

61. This is possible in Italy in the speech-to-text system already mentioned, Contini (2020), p. 12.

62. Ulenaers (2020), p. 16.

63. On the perspective of the right of confrontation of the defendant, Bampasika (2020).

64. Sourdin and Cornes (2018), p. 99.

65. About expert systems, Mendes (2020), pp. 53–54. Supporting that ‘machine learning evidence will likely only be admissible in the form of expert testimony’, Nutter (2019), p. 931 ff. Nevertheless, the outputs of the AI system are not machine learning substantive evidence in themselves; they are only a tool to analyse the testimonial evidence.

66. Defending that the existence of these video files may change the role of the appeal trial, eventually allowing for an adequate evaluation of demeanour evidence, Lederer (2000), pp. 260, 266 and Lederer (2021), pp. 314–315.

67. Greco (2020), p. 63.

68. Clark and Chalmers (1998), pp. 8–9.

69. About the concept of hybrid agents, a computational hybrid, ‘joint-agent systems’ that work on a joint performance developed by the human component and the machine component, so that they share the epistemic credit for the purposes of responsibility ascription, Matthias (2016), p. 145 ff.

70. ‘Moreover, machine learning output would likely be introduced in the form of expert testimony, meaning the defendant would have the opportunity to cross-examine an expert on the machine’s capabilities and processes’, Nutter (2019), p. 947.

71. Bampasika (2020). Adding that ‘The party concerned should have access to and be able to challenge the scientific validity of an algorithm, the weighting given to its various elements and any erroneous conclusions it comes to whenever a judge suggests that he/she might use it before making his/her decision’, and defending that this would be easier in European countries, because of the General Data Protection Regulation, while in the United States private interests (particularly the protection of intellectual property) prevail over the rights of defence, CEPEJ (2019), p. 55.

72. Contini (2020), p. 15.

73. Bampasika (2020).

74. ‘Responsibility for the effects of an action requires not only control over the process of decision, but also adequate epistemic access to the world context in which the decision takes place’, Matthias (2016), p. 150.

References

  • Allen RJ (2001) Artificial intelligence and the evidentiary process: the challenges of formalism and computation. Artif Intell Law 9:99–114

  • Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. ProPublica, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 6 Apr 2022

  • Ash E (2018) Judge, jury, and execute file: the brave new world of legal automation. Social Market Foundation, http://www.smf.co.uk/publications/judge-jury-and-execute-file-paper/, 1–10. Accessed 20 Apr 2022

  • Bampasika E (2020) Artificial intelligence as evidence in criminal trial. Paper presented at SETN 2020: 11th Hellenic Conference on Artificial Intelligence, Athens, Greece, 2–4 September 2020, https://pure.mpg.de/rest/items/item_3325158_2/component/file_3325159/content. Accessed 14 Apr 2022

  • Becker HS (1966) Outsiders – studies in the sociology of deviance. Free Press, New York

  • CEPEJ (2019) European Ethical Charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe, https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c. Accessed 7 Apr 2022

  • Clark A, Chalmers D (1998) The extended mind. Analysis 58(1):7–19

  • Contini F (2020) Artificial intelligence and the transformation of humans, law and technology interactions in judicial proceedings. Law Technol Hum 2(1):4–18

  • Council of Europe (2018) Algorithms and human rights: study on the human rights dimensions of automated data processing techniques and possible regulatory implications. Prepared by the Committee of Experts on Internet Intermediaries (MSI-NET), Council of Europe study DGI (2017) 12, Strasbourg, https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5. Accessed 7 Apr 2022

  • Dias JF (2004) Direito Processual Penal, reprint of 1st ed. from 1974. Coimbra Editora, Coimbra

  • Ferreira MC (1986) Curso de Processo Penal, I. Danúbio, Lisboa

  • Gillespie T (2014) The relevance of algorithms. In: Gillespie T, Boczkowski PJ, Foot KA (eds) Media technologies: essays on communication, materiality, and society. MIT Press, Cambridge, pp 167–194

  • Greco L (2020) Poder de julgar sem responsabilidade de julgador: a impossibilidade jurídica do juiz-robô. Marcial Pons, São Paulo

  • Lederer FI (2000) The effect of courtroom technologies on and in appellate proceedings and courtrooms. J App Pract Process 2(2):251–274

  • Lederer FI (2021) The evolving technology-augmented courtroom before, during, and after the pandemic. Vanderbilt J Entertain Technol Law 23(2):301–339

  • Lepri B, Oliver N, Letouzé E, Pentland A, Vinck P (2018) Fair, transparent, and accountable algorithmic decision-making processes – the premise, the proposed solutions, and the open challenges. Philos Technol 31:611–627

  • Lim S (2021) Judicial decision-making and explainable artificial intelligence – a reckoning from first principles? Singap Acad Law – Special Issue on Law and Technology 33:280–314

  • Mao S, Tao D, Zhang G, Ching PC, Lee T (2019) Revisiting Hidden Markov models for speech emotion recognition. ICASSP 2019 – 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 6715–6719

  • Matta PS (2004) A livre apreciação da prova e o dever de fundamentação da sentença. In: Palma MF (scientific coordination) Jornadas de Direito Processual Penal e Direitos Fundamentais. Almedina, Coimbra, pp 221–279

  • Matthias A (2016) The extended mind and the computational basis of responsibility ascription. Anatomia do Crime 3:129–153

  • Melim MM (2013) Standards de prova e grau de convicção do julgador. Revista de Concorrência e Regulação – C&R 4(16):143–193

  • Mendes PS (2010) A prova penal e as regras da experiência. In: Andrade MC, Antunes MJ, Sousa SA (org) Estudos em Homenagem ao Prof. Doutor Figueiredo Dias, vol III. Coimbra Editora, Coimbra, pp 997–1011

  • Mendes PS (2020) A representação do conhecimento jurídico, inteligência artificial e os sistemas de apoio à decisão jurídica. In: Rocha ML, Pereira RS (coord), Trigo AC (colab) Inteligência Artificial § Direito, reprint. Almedina, Coimbra, pp 51–63

  • Neves AC (1968) Sumários de Processo Criminal (1967-1968). Typed, Coimbra

  • Nutter P (2019) Machine learning evidence: admissibility and weight. J Constit Law 21(3):919–958

  • Picard RW (1995) Affective computing. MIT Media Laboratory Perceptual Computing Section Technical Report 321, pp 1–16

  • Searle J (1984) Can computers think? In: Searle J (ed) Minds, brains and science. Harvard University Press, Cambridge, pp 28–41

  • Seng D, Mason S (2021) Artificial intelligence and evidence. Singap Acad Law – Special Issue on Law and Technology 33:241–279

  • Sourdin T (2018) Judge v robot? Artificial intelligence and judicial decision-making. UNSW Law J 41(4):1114–1133

  • Sourdin T, Cornes R (2018) Do judges need to be human? The implications of technology for responsive judging. In: Sourdin T, Zariski A (eds) The responsive judge – international perspectives. Springer, Singapore, pp 87–119

  • Spaulding NW (2020) Is human judgment necessary? Artificial intelligence, algorithmic governance, and the law. In: Dubber MD, Pasquale F, Das S (eds) The Oxford handbook of ethics of AI. Oxford University Press, New York, pp 375–402

  • Surden H (2014) Machine learning and law. Wash Law Rev 89(1):87–115

  • Taruffo M (2005) La prueba de los hechos, 2nd ed. (trad. Beltrán JF). Trotta, Madrid

  • Ulenaers J (2020) The impact of artificial intelligence on the right to a fair trial: towards a robot judge? Asian J Law Econ 11(2):1–38

Author information

Correspondence to Catarina Abegão Alves.

Cited Case-Law

  • ECtHR, Gómez Olmeda v. Spain, no. 61112/12, 29 March 2016, hudoc.echr.coe.int

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter

Alves, C.A. (2023). AI Assistance in the Courtroom and Immediacy. In: Morão, H., Tavares da Silva, R. (eds) Fairness in Criminal Appeal. Springer, Cham. https://doi.org/10.1007/978-3-031-13001-4_9

  • DOI: https://doi.org/10.1007/978-3-031-13001-4_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-13000-7

  • Online ISBN: 978-3-031-13001-4
