1 Introduction

In the last two decades, the widespread adoption of AI technologies by companies and governments has created new issues in legal theory and practice, connected to the potential of these technologies to cause harm to individuals and groups. But how, and to what extent, are algorithms conceptualized in the legal discourse? Surprisingly, hardly at all. While the effects and consequences of the use of algorithms have been heavily discussed by legislators, judges, and scholars, their conceptualization within the legal domain has received little attention. Furthermore, the many legislative acts dealing with the digital world have generally failed to lay down an accepted definition of what an algorithm is.

In this paper, we will start filling this gap by explicitly looking at how the notion of algorithm is defined in contemporary case-law and legal theory (Sect. 2). We will see that what emerges is a plethora of different, often contrasting, uses of the term. To analyze this variety of definitions of ‘algorithm’ in contemporary legal practice and theory, we will employ Friedrich Waismann’s (Waismann 1945) notion of open texture (Sect. 3). We will argue that the concept of algorithm, as currently used in legal practice and theory, exhibits a substantial degree of open texture, co-determined by the open texture of the concept of algorithm itself and by the open texture inherent to legal discourse. We will analyze these two facets of the open texture of ‘algorithm’ in legal language by building upon previous work on the open texture of computation and on the open texture of law. More specifically, we will first argue that the concept of algorithm exhibits a certain degree of open texture even in its original scientific context, as the many ongoing foundational discussions on the nature and scope of the notion of algorithm, and on the closely related notion of computation, in the philosophy and theory of computing arguably show (e.g., Shapiro 2006b; Gurevich 2014; Dean 2016; Sieg 2018; Primiero 2020). Then, we will argue that, on top of this original open texture of the concept of algorithm, the use of this concept in the legal discourse exhibits an additional layer of semantic indeterminacy due to the essential open texture of legal discourse. To characterize this second kind of open texture, we will build upon the seminal work of Hart (Hart 1961), who employed and transformed Waismann’s notion of open texture to defend the interpretative discretion of courts (cf. Bix 1991, 2019; Stauer 2019).

We will show how the substantial degree of open texture that the concept of algorithm exhibits in legal discourse, jointly determined by its two aforementioned facets, justifies the many different uses of the term employed by legislators, judges, and scholars (Sect. 4). More importantly, we will argue that such an open texture is not detrimental to good legal practice but is instead, when kept within reasonable boundaries, a positive feature of our legal language. We will substantiate our argument by means of a case study, in which we will analyze a recent jurisprudential case (cf. Italian Council of State, no. 7891/2021) where first- and second-degree judges carved out contrasting notions of ‘algorithm’. We will see how, thanks to our analysis of the open texture of ‘algorithm’ in legal language, we can make sense of the different decisions taken by the two judges in our case study as different, contextually determined sharpenings of the concept of algorithm, aimed at balancing the conflicting interests at stake. Finally, we will use our findings to draw some general conclusions concerning the use of technical terms in legal instruments that address new technologies, such as the EU AI Act (Sect. 5).

2 Algorithms in legal language

In this section, we will briefly address the conceptualization and definition of algorithms in legal language and then survey some uses of the term algorithm in contemporary legal practice. In doing so, we will be focusing on the European Union legal framework.

The notion of algorithm has a millennia-long history in mathematics, its name originating from that of the ninth-century Persian mathematician al-Khwarizmi, who provided explicit solutions to certain kinds of equations. For centuries, the notion of algorithm was intuitively understood in different areas of mathematics as referring to a specific kind of proof-style, consisting in solving a mathematical problem by giving a list of explicit instructions for its resolution. This proof-style was already common in ancient mathematics for solving practical mathematical problems, especially of a geometric and astronomical character, and it was applied to many different kinds of theoretical and applied problems in the history of mathematics (see Chabert et al. 1999).

The usage of the term ‘algorithm’ changed significantly in the last century, mainly due to two major events in the history of algorithms. First, in the 1930s, the notion of algorithm received a rigorous mathematical foundation with the so-called confluence of 1936 and the related birth of computability theory (see Gandy 1988). Second, the twentieth century also saw the invention and rapid diffusion of digital computers. The remarkable expansion of digital computers in human society also brought about a substantial broadening in the scope of the term ‘algorithm’. In fact, in the last century, the notion of algorithm exited its original technical mathematical sphere and entered the public discourse, where it is often used as an umbrella term, referring indistinctly not only to algorithms in the technical sense but also to the model, target, data, applications, and hardware connected to an algorithm (see Gillespie 2016, p. 22). Moreover, in the last two decades, significant advancements in computing power and the rise of a new generation of AI technologies, such as machine learning and especially its deep learning variant, have led to a substantial increase of interest in algorithms among companies, researchers, governments, and the general public. One linguistic symptom of this increased interest is that, nowadays, many institutions and companies routinely use the term algorithm as an adjective to promote a certain set of assumptions in the public, such as rigor, technology, impartiality, and fairness (see Gillespie 2016, pp. 23–25).

The increasing role and use of algorithms in our lives has also given rise to new social and legal issues. For instance, algorithms have been observed to have the potential to cause material and non-material harm to individuals and groups, especially when they are used to make decisions that can impact fundamental rights and freedoms, such as when establishing someone’s creditworthiness, fitness for a job post, risk of re-committing criminal offences, and so on (cf. Article 29 Working Party 2018; Wachter 2019; Safak and Farrar 2021; Veale and Zanfir-Fortuna 2022; Fundamental Rights Agency 2022; AbuMusab 2023). More recently, the emergence of so-called generative AI systems has also endangered fundamental rights and freedoms that were previously untouched, in particular intellectual property and the freedom of the arts and sciences. These issues include threats to individual rights, group rights and, in some cases, even society at large (cf. Hallinan and Martin 2020; Gordon 2021; Mantelero and Esposito 2021; Mantelero 2022; Balboni and Francis 2023; Varona and Suarez 2023; Behnam Shad 2023; Pflanzer et al. 2023).

As a result of these phenomena, the use of algorithms and algorithmic decision-making is impacting various areas of the law, and has therefore attracted much scholarly attention and prompted judicial decisions around the world. These areas include, among others, privacy and data protection, when algorithmic systems process personal data; administrative law, when they are used in the context of public decision-making (as shall be seen in the case study assessed in Sect. 4.1 of this paper); constitutional law, in light of the impact they may have on constitutionally protected rights and freedoms and on democracy itself (cf. Simoncini 2022; Mantelero 2022; Fundamental Rights Agency 2022); criminal procedural law, when algorithms are used for preventive policing or to predict someone’s chance of re-offending; and consumer protection, when algorithms provide or personalize services aimed at consumers (e.g., recommender systems).

Often, the use of algorithms encroaches upon more than one area of the legal domain. Take, for instance, the case of automated decision-making systems used by ride-sharing companies to allocate customer-requested rides among their drivers in real time: these types of systems have an impact on workers’ rights, as they are used to manage the working relationship between the company and the drivers, as well as on the right to non-discrimination, where work allocation has an unjustified impact on one or more protected attributes (e.g., ethnicity, gender, age). Another infamous example concerns the provision of highly personalized political advertising (so-called “micro-targeting”) in the context of the Cambridge Analytica scandal: the collection of voters’ personal data and their subsequent profiling, without their knowledge or consent, arguably impacted not only the voters’ right to privacy, but also their personal autonomy and, ultimately, the good functioning of the democratic process.

But how, and to what extent, are algorithms conceptualized in the legal discourse? In light of the above, one would expect the legislator to have set forth one or more notions of ‘algorithm’, or its definition to be at the center of the legal discourse. In reality, however, little scholarly and jurisprudential attention has been paid to carving out a legally accepted definition of the concept, while legislative acts dealing with the digital world have generally failed to lay down a definition of the term. Instead, the focus of legislators, scholars, and judges has mostly been on the effects and consequences of the use of algorithms, especially in light of individual rights, and less on their conceptualization within the legal domain.

A significant example of this lack of attention towards the exact definition of what an algorithm is can be found in Article 22 of the General Data Protection Regulation (GDPR), arguably the most important and widely known provision regulating algorithmic decision-making to date. According to the dominant interpretation, the first paragraph of the Article lays down a general and rebuttable prohibition on fully automated decision-making directed towards an individual: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. However, this paragraph provides a definition neither of “algorithm” nor of “algorithmic decision-making”. Instead, the material scope of the prohibition refers to “a decision based solely on automated processing”. This expression has been interpreted by the EU Data Protection Authorities, in their 2018 Guidelines on Automated individual decision-making and Profiling, as “the ability to make decisions by technological means without human involvement” (cf. Article 29 Working Party 2018, p. 8), therefore bypassing the need to lay down a definition of algorithm, or even to use the term at all, both within the legislative text and in its interpretation via soft-law instruments. Interestingly, this is despite the fact that the word “algorithm” and its derivatives appear quite often within the Guidelines (nineteen times), without the EU Data Protection Authorities ever trying to set forth a definition, or even referring to an external one.

The above approach, which tends to bypass the need to define the term, can be found in other noteworthy pieces of legislation endeavoring to regulate algorithmic decision-making, such as the recent European Union’s Digital Services Act (“DSA”), where the term “algorithm” and its derivatives are used twenty-two times across the text, often within expressions such as “algorithmic system”, without the Act ever attempting a definition thereof. Even the latest draft of the European Union’s AI Act does not use the word ‘algorithm’ or its derivatives within the definition of “AI system”, despite algorithms being an essential component of artificial intelligence systems.

Even in legal doctrine, the conceptualization of algorithms has received little academic attention: legal scholars have mostly addressed the consequences of the widespread adoption and use of algorithms in various domains of human activity, and have focused less on their definition (cf. Hallinan and Martin 2020; Mantelero and Esposito 2021; Mantelero 2022; Balboni and Francis 2023). When confronted with the term, legal scholars often simply refer to the ‘basic’ notion of algorithm as “instructions that a computer uses to perform a task” (Picciau 2021).

In light of the above, the term ‘algorithm’ and its derivatives are used by the legislator to refer to different concepts. Sometimes, the term is used to refer to an algorithm in its broadest and most technology-neutral meaning. This use can be seen, for example, within Art. 5(6) of the Platform to Business Regulation (Regulation (EU) 2019/1150 or P2B Reg.):

“Providers of online intermediation services and providers of online search engines shall (...) not be required to disclose algorithms or any information that (...) would result in the enabling of deception of consumers”

and within Art. 21(1) of the Digital Markets Act (Regulation (EU) 2022/1925):

“The Commission may also (...) require access to any data and algorithms of undertakings and information about testing, as well as requesting explanations of them”.

The broad meaning of ‘algorithm’ in these provisions can be inferred from the use of the term together with “any information” and “any data”, respectively. In other instances, instead, the term algorithm is used to refer to technology-specific applications. Oftentimes, the EU legislator uses the term to refer to a specific kind of algorithm, such as machine-learning algorithms. Two examples of this use are, respectively, Recital 96 of the DSA and Recital 45 of the draft AI Act:

“Such a requirement may include, for example (...) data on the accuracy, functioning and testing of algorithmic systems for content moderation, recommender systems or advertising systems, including, where appropriate, training data and algorithms”

“the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets”.

In the absence of a legally established definition, and given the plethora of different legal uses of the term outlined above, when presented with a case whose solution requires establishing a definition of the term, judges have had to draw from the various definitions that are usually accepted within the computing field: a clear example of this shall be seen in the case study that we will address in Sect. 4.1. This has, in turn, led to a plurality of different uses of the concept of algorithm in case-law and contemporary legal language, mirroring (and drawing from) the plurality of definitions that exist within the computing field.

As a result, as already noted by some authors (e.g., Fidanza 2022), the use of the term ‘algorithm’ (and its derivatives) in the legal domain carries, to date, a seemingly inescapable margin of ambiguity that courts need to deal with whenever they are called upon to provide a definition.

We can schematically represent the different uses of ‘algorithm’ in contemporary EU law instruments surveyed in this section with the following table (Table 1):

Table 1 The different uses of ‘algorithm’ in contemporary EU law instruments

3 Algorithms and open texture

Having seen, in the last section, how the concept of algorithm is discussed and conceptualized in contemporary legal language, we now turn to the question of its open texture. As already stated in the introduction, we will use the notion of open texture to clarify the nature and the normative status of the disagreements involving the concept of algorithm in contemporary legal language. To this end, in this section we will first introduce the notion of open texture of concepts as it is used in philosophy and law (Sect. 3.1) and then discuss whether and how this notion can be applied to the concept of algorithm (Sect. 3.2).

3.1 The notion of open texture in philosophy and law

The notion of open texture was originally introduced by Friedrich Waismann in his paper “Verifiability” (Waismann 1945), in an attempt to distinguish healthy forms of empiricism from crude forms of radical reductionism, such as the project of translating all material statements into sense-data ones. According to Waismann, the former position consists in emphasizing the centrality of confirming and disconfirming scientific statements for any reasonable epistemological project, and it can be successfully defended against critiques. Radical reductionism, instead, is for Waismann a completely ill-conceived epistemology, doomed to fail due to (what Waismann calls) the open texture of most of our empirical concepts. With the term ‘open texture’, Waismann denotes the essential incompleteness and openness of many of our empirical concepts. In contrast to some completely formalized and precise concepts, Waismann stresses the fact that it is often unclear how to apply empirical concepts in unexpected situations. This essential plasticity of our empirical concepts is what makes the complete verification of any statement about the material world impossible. This impossibility, in turn, determines the failure of any reductionist attempt to fully translate our material-object statements into phenomenalist language, since such a translation would require knowing in advance all the conditions of verification of material statements. In Waismann’s own words:

“Open texture is a very fundamental characteristic of most, though not of all, empirical concepts, and it is this texture that prevents us from verifying conclusively most of our empirical statements. Take any material object statement. The terms that occur in it are non-exhaustive; that means that we cannot foresee completely all possible conditions in which they are to be used; there will always remain a possibility, however faint, that we have not taken into account something or other that may be relevant to their usage; and that means that we cannot foresee completely all the possible circumstances in which the statement is true or in which it is false. There will always remain a margin of uncertainty.” (Waismann 1945, p. 43)

Abstracting the concept of open texture away from Waismann’s original stance, this notion highlights the fact that the conditions of applicability of (many of) our empirical concepts are never final. However established the use of a certain empirical concept may be, we can always encounter new, surprising conditions in which we do not know how to apply the given term. Thus, as Waismann argues, even for what may appear to be perfectly stable empirical concepts, such as cat, friend, and gold, the possibility of uncertainty given by their open texture presents itself in the form of gigantic cats, disappearing friends, and radioactive gold (cf. Waismann 1945, pp. 41–42). In contrast to (what he takes to be) the essential closedness of definitions in a formal system, Waismann takes our empirical concepts and statements to be always revisable in light of surprising experiences.

It is important to stress that open texture denotes a linguistic phenomenon different from vagueness. As Waismann himself notes, open texture is not vagueness, but “something like the possibility of vagueness” (Waismann 1945, p. 42). Indeed, even terms that do not exhibit any form of vagueness, such as the aforementioned natural kind term ‘gold’, can be subject to open texture, due to the possibility of future unpredictable situations in which the application of the term is neither clearly warranted nor clearly unwarranted. Open texture is thus a phenomenon that even crystal-clear terms of our everyday language can exhibit. Take, for instance, a term like mother. Despite the complete absence of vagueness in the definition and use of the term, recent technological advances in reproductive techniques have arguably made the term exhibit a certain degree of open texture. There is no clear linguistic warrant, in fact, for reserving the term mother for the person who produces the ovum rather than the person who carries the fetus, or vice versa (see Blackburn 1996, p. 270). In what follows, we will refer to this original notion of open texture as open texture\(_{W}\).

Despite the intuitive strength of Waismann’s presentation of open texture, the exact characteristics of open texture as a semantic phenomenon are somewhat unclear. Prompted by this lack of clarity, several philosophers have tried to explicate the notion. Arguably the most detailed explication of open texture was developed by Shapiro (2006a, b, 2013), who, in a series of works, retrieved Waismann’s notion of open texture as a pivotal part of his contextualist account of vagueness (Shapiro 2006a). According to Shapiro (2006a, p. 10), open texture amounts to the possibility for a competent speaker to decide either way, in different contexts, whether a certain term can be applied to a certain object. Defined in this way, open texture ensures the existence of borderline cases in the application of certain terms, understanding these cases as unsettled by linguistic and pragmatic rules. In contrast to Waismann’s original discussion, Shapiro takes open texture to denote a mainly linguistic phenomenon, inherently intertwined with the existence of vagueness and borderline cases. Let us refer to Shapiro’s either-way-decision version of open texture as open texture\(_{S}\).

Waismann’s notion of open texture has also attracted the interest of legal scholars. Thanks to the seminal work of Hart (Hart 1961), in fact, the open texture of our concepts has also been discussed in the context of legal decision-making. Hart discusses open texture within his general conceptualization of law. Specifically, he used the notion of open texture to defend the interpretative discretion of courts, which should be exercised when they are confronted with real-life cases not envisaged in advance by the legislator when framing general rules. It is precisely because our language is fundamentally open-textured that the judge is legitimized to adapt legal concepts to the evolving human world:

“The open texture of law means that there are, indeed, areas of conduct where much must be left to be developed by courts or officials striking a balance, in light of circumstances, between competing interests which vary in weight from case to case." (Hart 1961, p. 135)

Despite referring explicitly to Waismann’s work, Hart, as already stressed by Bix (1991, 2019), seems to use the notion of open texture with a different focus and a different scope from Waismann’s original account. If Waismann, as we stressed above, mostly focused on the epistemological and linguistic aspects of the open texture of our concepts, Hart’s discussion focuses on the practical consequences that this phenomenon has for legal interpretation. Moreover, if Waismann, in his argument against reductionism, discusses extreme situations, in which familiar entities such as gold and cats dramatically change their properties, Hart discusses more mundane cases, such as what exactly can be considered a vehicle in a park. Hart’s notion of open texture can then be summarized as the capability of legal language to be fundamentally malleable to evolving interpretations (cf. Stauer 2019). We will refer to this specific version of open texture as open texture\(_{H}\).

We have now seen three different versions of the notion of open texture: the original notion that can be isolated in Waismann’s writings, Shapiro’s philosophical explication, and the lightweight correlate of the notion that Hart introduced into the legal literature. Although the exact focus and scope of the notion change from version to version, it is safe to assume that the core linguistic phenomenon denoted by these three versions is the same: the plasticity of our empirical concepts and the related revisability of our semantic assumptions about them. We will refer to this core phenomenon simply as open texture, further specifying which version of the notion we focus on when needed.

3.2 The open texture of ‘algorithm’

Going back to the main topic of this work, we can now ask ourselves whether the phenomenon of open texture also involves the concept of algorithm.

As we recalled in Sect. 2, in the last century, scientific and technological advancements have produced a substantial change in the use of the term ‘algorithm’. The term passed, in fact, from its original, intuitively understood technical meaning to a plethora of interconnected uses that span different disciplines and different communities. This is a perfect context for a term to exhibit open texture and, indeed, it is not difficult to find situations in which we are not sure whether the term algorithm applies or not. For instance, when we speak of machine learning algorithms, does the term algorithm refer only to the actual learning instructions of the system or does it also encompass the training data with which the algorithm learns? Or, in the case of complex systems of algorithmic decision-making, such as the infamous TikTok algorithm that many members of the public and many sociologists refer to, which (set of) component(s) of such an algorithmic system is the algorithm?
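To make the first of these ambiguities concrete, consider the following minimal sketch in Python (a toy illustration of our own; all names and data are hypothetical). Each of the three candidates below is a defensible referent of the expression ‘machine learning algorithm’, and ordinary usage does not settle which one is meant:

```python
# A toy perceptron, used only to locate the possible referents of
# "machine learning algorithm". Hypothetical illustration, not drawn
# from any of the cited works.

# Candidate 1: the learning procedure alone, i.e., the fixed,
# data-independent instructions written by the programmer.
def train_perceptron(data, epochs=10, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return w, b

# Candidate 2: the procedure *together with* the training data that
# shapes its behavior.
training_data = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
w, b = train_perceptron(training_data)

# Candidate 3: the resulting decision rule, whose instructions were not
# written by any programmer but induced from the data.
def classify(x):
    return 1 if w * x + b > 0 else 0

print(classify(1.5), classify(-1.5))  # prints: 1 0
```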

Despite these intuitive examples of the open texture that the term algorithm allegedly exhibits, some might be tempted to object to the open texture of this term, on account of its semantic specificity. Unlike the other examples of open texture that we recalled in this section, in fact, the term algorithm enjoys a mathematical definition in computability theory that seems, prima facie, perfectly clear and, since it is defined in a formal system, also immune to empirical revision or expansion. Indeed, Waismann, in his presentation of open texture, explicitly contrasts the open texture of our empirical concepts with (what he takes to be) the certainty of the applicability conditions of formal concepts. How, then, can the concept of algorithm exhibit open texture when computability theory arguably gave this concept a precise formal extension? Presented with a possible situation of open texture of algorithm, can we not just rely on the formal definition of an algorithm to fix the conditions of applicability of the term?

This possible objection underestimates the scope and depth of the phenomenon of open texture. Even if a perfect mathematical definition of a term is available, the application of this formal definition to an informal situation that originated in natural language is neither straightforward nor univocal. Indeed, in the philosophy of computation, there is much discussion over the status of the so-called Church-Turing thesis (e.g., Shapiro 2006b, 2013; Sieg 2009, 2013; Copeland and Shagrir 2019; Quinon 2019; De Benedetto 2021; Papayannopoulos 2023), i.e., the thesis equating our informal notion of effective calculability with (one of) our formal notion(s) of classical computability (i.e., Turing computability, general recursiveness, Post computability, and the like). Within these discussions, the intuitive concept of computation has been argued to exhibit a certain degree of openness in its application and exact definition (e.g., Sieg 2009, 2013; Shapiro 2013; Quinon 2019; De Benedetto 2021) that cannot be found in (one of) its formal equivalent(s). Our informal notion of effective calculability denotes, in fact, our intuitive concept of what can be calculated without any ingenuity. As Shapiro (2006b, 2013) argues, the applicability conditions of such an intuitive definition are far from clear, as they drastically hinge on what ‘can’ is meant to denote. Which agent is supposed to do the calculation? How many resources (e.g., effort, computational power) is the agent allowed to use? How much time does the agent have to perform the calculation? For most of the millennia-long history of the concept of algorithm there was no agreement on how to answer these questions and, without such an agreement, the exact conditions of applicability of effective calculability remain unclear. Analogous questions can be asked about our intuitive concept of algorithm, commonly understood in mathematics and computer science as referring to the intensional specification of an extensional computation: algorithms are the instructions that specify a given computational process (cf. any standard presentation of the mathematical theory of algorithms, such as Péter 1957; Malc’ev 1970; Uspensky and Semenov 1993; Knuth 1997). This intuitive definition leaves many elements of the computational process related to an algorithm undefined. For instance, at which level of abstraction should we identify these instructions (cf. Moschovakis 1998, 2001; Gurevich 2000, 2015; Dershowitz and Gurevich 2008; Sieg 2009, 2013; Dean 2016; Papayannopoulos 2023)? Or, which classes of computation (e.g., physical, abstract, effective, analog) fall within the intended scope of our intuitive notion of algorithm (cf. Antonutti Marfori and Horsten 2018; Piccinini 2015; Shagrir 1997, 2022; Gurevich 2019; Maley 2023)?
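The level-of-abstraction problem can be illustrated with a minimal sketch of our own (a hypothetical toy example, not drawn from the cited works). The three Python functions below have the same extension, since they compute the same greatest-common-divisor function on positive integers; yet whether they count as one algorithm, two, or three depends precisely on the level of abstraction at which ‘the instructions’ are identified:

```python
def gcd_recursive(a, b):
    # Euclid's method, stated recursively.
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    # Euclid's method again, stated as a loop: a different program, but
    # arguably the *same* algorithm at a coarser level of abstraction.
    while b != 0:
        a, b = b, a % b
    return a

def gcd_exhaustive(a, b):
    # A genuinely different procedure: try every candidate divisor.
    # Same extension as Euclid's method, yet plausibly a distinct algorithm.
    for d in range(min(a, b), 0, -1):
        if a % d == 0 and b % d == 0:
            return d

# All three agree extensionally on every input:
assert gcd_recursive(48, 18) == gcd_iterative(48, 18) == gcd_exhaustive(48, 18) == 6
```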

In this way, we can see how the existence of a (set of) formal definition(s) of algorithm in computability theory does not represent an obstacle to the open-textured character of this term. Indeed, we have seen in this section that the concept of algorithm arguably exhibits a significant degree of open texture, in both Waismann’s and Shapiro’s senses of the notion (cf. Sect. 3.1). As prescribed by Waismann’s open texture\(_{W}\), the conditions of applicability of ‘algorithm’ seem to be neither fixed nor fixable in advance, but revisable and contestable in light of technological advancements (e.g., the rise of machine learning systems) or societal changes (e.g., the public discussions on algorithmic systems). Moreover, as prescribed by Shapiro’s open texture\(_{S}\), competent practitioners (indeed, even experts of computability theory and the philosophy of computing) have often decided either way on whether something is an algorithm or not, demonstrating the existence of borderline cases of algorithm in both technical and everyday uses of the term.

4 The open texture of ‘algorithm’ in legal language

Having discussed the conceptualization and definition of algorithms in legal language in Sect. 2 and the open-textured nature of the concept of algorithm in Sect. 3, we will now examine whether the notion of open texture can make sense of the different, often contrasting notions of algorithm that can be found in contemporary legal theory and practice.

Several of the uses of algorithm that we presented in Sect. 2 appear to be legitimate domain-specific sharpenings of this notion, justified by the considerable degree of open texture that it exhibits. For instance, the use of “algorithms” found in Recital 96 of the DSA can be seen as a contextual sharpening of the concept of algorithm that refers only to a certain class of algorithms, namely, the ones that are employed to train machine-learning-based systems. The same goes for the use of the term algorithm in Recital 45 of the draft AI Act.

Yet, we saw in Sect. 3.1 that legal language seems to possess a particular kind of open texture, specific to its own discourse. Does Hart’s legal kind of open texture, i.e., what we called open texture\(_{H}\), also affect the uses of algorithm in legal language? We will see in the next subsection, with the help of a case study, that this is arguably the case.

4.1 A case study

In this subsection, we will examine a concrete case in which two Italian administrative courts were confronted with the necessity of carving out a legal notion of ‘algorithm’. As we will see, in line with the open-textured character of the term, the courts provided contrasting definitions of ‘algorithm’, resulting in different decisions in the case presented to them.

This exercise allows us to underline the concrete manifestation of the open-textured features of the notion of “algorithm” in the legal domain, and how open texture explains the plurality of different uses of the concept of algorithm in contemporary legal language. Essentially, to resolve the dispute at hand, the judges were confronted with a question: in the specific technical-medical context of the case at issue, should we understand algorithms as computing programs that require human operation or, rather, as automated decision-making systems? As we will see, the two judges of our case study answered differently. By employing the concept of open texture, we will conceptualize the different decisions of the two judges as possible sharpenings of the same open-textured concept, a perspective that helps us explain the two contrasting accounts of the term.

4.1.1 The judgement of the Lombardy Regional Administrative Court: a “plain" definition of algorithm

The facts of the case originated in a public tender, which took place during 2021, concerning the supply of pacemakers and defibrillators for the benefit of Italian regional public healthcare institutions. One of the items of the public tender included the supply of "high-end DDDR pacemakers" (Italian Council of State, Sect. III, Judgement 4–25 November 2021, no. 7891, facts of the case, para. 1). Specifically, the letter of invitation and the technical specifications had indicated, among the evaluation criteria for the tender, a parameter called "Algorithm of prevention + treatment of atrial tachyarrhythmias", to which fifteen points were assigned in the case of the presence of both algorithms and seven points in the case of the "presence of only the prevention algorithm or only the treatment of atrial tachyarrhythmias" (Judgement 7891/2021, facts of the case, para. 2).

The commission entrusted with the evaluation of the public tender decided to attribute the maximum score of fifteen only to algorithms that presented "automated" characteristics. In particular, the commission considered the requirement of proposing algorithms for both the prevention and the treatment of the disease to be satisfied by a company called Microport CRM S.r.l. (“Microport”), which presented an algorithm capable of “automatically allowing to contrast the prefibrillatory rhythm constituted by the recognition of frequent atrial ectopias and treated by reduction/homogenization of the atrial refractory periods”, thereby attributing the maximum score of fifteen points to Microport’s solution (Judgement 7891/2021, para. 2).

Another company, called Abbott Medical Italia S.r.l. (“Abbott”), had instead proposed a solution called “NIPS” (which stands for “Noninvasive Program Stimulation”), consisting of a test to be activated in cardiology clinics through an external programmer, to be then used by a clinical operator to temporarily take control of the pacemaker and to impart, based on the real-time assessment of the heart rhythm, a sequence of stimuli for therapeutic purposes; during this process, the normal sensing and automatic response functions of the pacemaker are temporarily inhibited (cf. Judgement 7891/2021, para. 9.2). This solution does not automatically correct arrhythmias when the dysfunction arises and, as a result, NIPS lacks the automated capabilities of Microport’s algorithm, indicated above.

Due to this lack of automation, Abbott’s solution was not awarded the full fifteen points by the public tender commission, and therefore placed second in the final ranking (cf. Judgement 7891/2021, para. 2). In this respect, the public tender commission seems to have considered that a necessary feature of the notion of algorithm was the ability to independently analyze external inputs and activate itself automatically, without human intervention (Fidanza 2022, p. 9).

Abbott was not satisfied with this result and disagreed with the commission’s reasoning, seeking the annulment of the commission’s decision by bringing proceedings in front of the competent Regional Administrative Court (Tribunale Amministrativo Regionale, or “TAR”), i.e., the TAR of Lombardy.

In its judgement, the court first specified that "the tender only required the presence of a treatment algorithm (without specifying anything else)" and thus, to assess whether the commission had correctly interpreted the requirements of the public tender, sought to define the concept of algorithm by drawing from academic literature, in the absence of an established legal definition. In the TAR’s view, the term algorithm "simply refers to a finite sequence of instructions, well-defined and unambiguous, so that they can be performed mechanically and such as to produce a certain result (such as solving a problem or performing a calculation and, in this case, treating an arrhythmia)” (Judgement 7891/2021, para. 3), therefore adopting the common or intuitive notion of algorithm (Primiero 2020, p. 69). Having carved out the concept, the court applied it to the facts of the case, concluding that “Abbott correctly objected to the erroneous assessment of the tender commission which—despite the presence of an arrhythmia treatment algorithm in Abbott’s device (i.e., the NIPS algorithm, which can be plainly defined as such)—attributed only 7 points instead of 15 to it. In fact, the commission confused, unduly overlapping them, the concept of algorithm with that of automatic start of the treatment" (Judgement 7891/2021, para. 4).

In line with its reasoning, the TAR considered that the circumstance that the input phase of Abbott’s solution was performed by a human did not prevent it from qualifying as an algorithm, according to the common notion. As a result, the TAR overturned the tender commission’s decision, thereby assigning the full fifteen points to Abbott’s solution.

4.1.2 The judgement of the Council of State: leveraging the open texture of algorithms

Microport was not satisfied with the TAR decision and brought an appeal before the Council of State (Consiglio di Stato or “CdS”), that is, the highest Italian administrative court. In its ruling, published on 25 November 2021, the court criticized the notion of algorithm adopted by the TAR to solve the case in the first degree of the proceedings.

In doing so, the CdS did not deny that the definition carved out by the TAR indeed corresponds to the common and generally accepted notion of algorithm (cf. Judgement 7891/2021, para. 9.1). Rather, it deemed that the application of such a notion was incapable of adequately addressing and solving the case at hand. In particular, according to the CdS, when the notion of algorithm is applied to technological systems, such as the high-end pacemakers under discussion, it is “inextricably connected to the concept of automation, i.e., to action and control systems aiming at reducing human intervention” (Judgement 7891/2021, para. 9.1). When used in the context of technological systems, the notion of algorithm therefore requires the presence of a high degree of automation in the functioning of the system, aimed at reducing the human intervention needed to obtain the desired output. In the words of the court, the notion of algorithm entails the presence of “instructions capable of providing an efficient degree of automation” when applied in the context of technological systems (cf. Judgement 7891/2021, para. 9.2).

Having established this further, case-specific definition of algorithm, the CdS applied it to the facts under consideration, observing that Abbott’s solution, contrary to Microport’s, did not allow the automated treatment of atrial tachyarrhythmias (cf. Judgement 7891/2021, para. 9.3). Consequently, the court concurred with the initial decision of the tender commission, which had awarded the full fifteen points only to Microport’s solution, on account of its automated characteristics.
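The contrast between the two sharpenings can also be rendered schematically in code (a deliberately simplified sketch of our own; the function names and string values are hypothetical and do not reproduce the actual devices’ logic). Under the TAR’s plain notion, both procedures below count as algorithms, since both are finite, unambiguous, mechanically executable instruction sequences; under the CdS’ context-specific notion, only the second qualifies, because only it closes the loop without human intervention:

```python
def nips_style_treatment(rhythm, clinician_command):
    # Human-in-the-loop procedure, loosely modeled on Abbott's NIPS:
    # well-defined instructions, executed only upon an operator's command.
    if clinician_command == "start" and rhythm == "atrial tachyarrhythmia":
        return "deliver stimulation sequence"
    return "no action"

def automated_treatment(rhythm):
    # Closed-loop procedure, loosely modeled on Microport's solution:
    # detects the arrhythmia and reacts without human intervention.
    if rhythm == "frequent atrial ectopias":
        return "shorten/homogenize atrial refractory periods"
    return "no action"

# TAR's plain notion: both functions are algorithms.
# CdS' sharpened notion: only automated_treatment counts, since only it
# provides "an efficient degree of automation".
```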

We can schematically present the different notions of algorithm that the two judges carved out with the following table (Table 2).

Table 2 The different notions of algorithm employed by the two judges

4.1.3 An open-texture-based reconstruction

First of all, it is worth noting that it is not uncommon for judges to be confronted with the necessity of establishing the meaning of terms pertaining to a specific technical sector in order to solve a judicial controversy, when the applicable legislation has not established an explicit notion thereof. Presented with this situation, the judge usually relies on academic literature pertaining to the relevant technical field (often with the assistance of a court-appointed expert). In this respect, at first glance the TAR seems to have carried out said operation flawlessly. In fact, the TAR applied the definition of “algorithm” which is by far the most common in the literature. However, while technically correct, the TAR arguably missed the mark, as the public administration was clearly looking for an algorithm with certain advanced features, such as automation, which are not included in the common notion of the term. Conversely, while the CdS’ definition deviates from the common judicial practice outlined above, it does so in the name of allowing a correct allocation of the interests of the parties, which is ultimately in line with the function of the judicial power.

Let us now look at what happened in our case study through the lens of the different notions of open texture that we characterized in Sect. 3.1, and ask whether the above normative conclusions still hold from our open-texture-based perspective.

In terms of Waismann’s and Shapiro’s epistemic-linguistic notions of open texture, i.e., what we called open texture\(_{W}\) and open texture\(_{S}\), the TAR’s definition of algorithm, corresponding to the most commonly accepted usage and meaning of the term, exhibits a considerable amount of open texture, allowing several different sharpenings. The CdS’ definition, instead, can be considered an extremely context-dependent sharpening of the concept of algorithm, in that it involves a notion (i.e., automation) that is not usually a defining feature of algorithms. Taken out of context, this notion is quite inadequate (e.g., which degree of automation is necessary for an algorithm to be classified as such?). However, as we saw, in the context of the tender, automation was indeed a required feature of the algorithm that the public administration was seeking. We can thus understand this context-dependent sharpening as arising from the CdS’ aim of reaching a decision capable of correctly balancing the interests at stake in the public tender. This is why the CdS leveraged the open texture of the law, i.e., what we called open texture\(_{H}\), producing this specific sharpening. In other words, the healthcare system was indeed looking for a solution that minimized human intervention and, as such, the interests of the parties trumped formal correctness.

The CdS decision therefore makes apparent that, in the legal domain, the necessity to correctly allocate the various rights and interests at stake creates a “double open texture” phenomenon: the inherent open texture of the term (within the meaning of open texture\(_{W}\) and open texture\(_{S}\)) is combined with the open texture of the law (open texture\(_{H}\)).

Our case study can also teach us some general lessons on how to deal with the semantic uncertainty of technical terms like algorithm in the legal domain. As we have seen, both the plethora of meanings of algorithm in the legal domain, outlined in Sect. 2 above, and the different definitions outlined by the judges within the ruling examined above reflect a general difficulty not only in defining an “algorithm”, but also in the very use of such a term in the legal domain. The consequences of this uncertainty are of clear importance in the legal field, where definitions have the fundamental role of delimiting the field of application of legal rules.

In sum, we argue that the usefulness of setting clear-cut definitions of “algorithm” in the legal domain, much like the notion of “artificial intelligence system” provided by the AI Act, should not be overemphasized. In fact, as argued above, the judgement examined here makes apparent that the legal operator (in this case, the judge) should carve out the meaning of the term on a case-by-case basis, thereby leveraging the double open texture that it exhibits within the legal domain. In line with the above, we argue that the legal field does not need legally mandated definitions of algorithm. At the same time, we encourage further research concerning the judicial elaboration of this notion in different sectors of the law.

5 Conclusion

Let us recap the main steps of the present work. We started by problematizing the multiple, often contrasting senses in which ‘algorithm’ is used in contemporary legal language. We then resorted to the concept of open texture, as introduced by Waismann and further developed in the philosophy of language by Shapiro and in the philosophy of law by Hart. We argued that the multiple different uses of ‘algorithm’ in legal language are a byproduct of its radical degree of open texture, co-determined by the open texture of the concept itself and by that of legal language. We substantiated our argument by looking at a recent judicial case in Italian law, where judges had to leverage the open texture of ‘algorithm’ to reach the correct decision.

The open texture of the concept of algorithm in contemporary legal language arguably demonstrates the pervasiveness and ineliminability of the phenomenon of open texture in legal language, even in the case of prima facie scientifically well-defined technical concepts, such as algorithm. Nevertheless, our case study arguably supports the use of context-specific definitions of technical terms such as algorithm in legislation, to avoid legal uncertainty and excessive judicial intervention. Assessing the adequacy of this maxim in other technology regulations, such as the aforementioned upcoming EU AI Act, represents a promising direction for future work.