Introduction

Law and code both fundamentally structure our societies. Law as a normative enterprise seeks to control people and things. Law’s rules determine what things can exist; what actions are permissible, prescribed, or prohibited; whose actions count; and how entities and things should be structured. Code as a technology ‘mediates, supplements, augments, monitors, regulates, facilitates, and ultimately produces collective life’ (Kitchin and Dodge 2014, p. 9). As lines of code execute in computers, they create ontologies that are then enacted and transported into coded objects, infrastructures, processes, and assemblages (Kitchin and Dodge 2014, pp. 6–7).

Advances in information processing technology and the ensuing incessant digitalisation have increased encounters between law and code. Law is increasingly entering coded processes and assemblages, seeking to control them. As code becomes embedded in ever more assemblages, it also places demands on law. If law is to govern code, it must be code-relevant.

With this paper we contribute to the theoretical accounts challenging law’s anthropocentric assumptions and studying what happens in the increasingly numerous encounters between law and code. Herein, we develop a theoretical framework based on a diffractive reading (Merten 2021) of Karen Barad’s agential realism and Gilbert Simondon’s philosophy of information. We interpret both code and law as ‘units of reality’ (Barad 2007, p. 25), arguing that they should be understood as agentic matter that constantly entangles with other matter in an ontogenetic transduction process (Simondon 2020 [2005]). This framing allows us to enact both law and code as material, evolving entities that have an incessant materiality and the power to affect other entities. Such entities do not pre-exist their entanglement; they are produced in their making, that is, they are always already mutually co-constituted and intra-acting. While necessarily already intertwined, within this ontology the law that entangles with code provides the normative component of the entanglement, and code provides the technological component.

What forms of sensemaking of encounters between code and law does our theoretical framework enable? To explore its power and demonstrate its usefulness, we present a brief case study of a series of encounters between law and code by tracking the trajectory of ‘the right to explanation’ embedded in European data protection rules.

Theoretical Framework

Law and Code

Law is a key cluster of structuring technologies in modern societies. Law’s rules, norms, principles, doctrines, and machineries extend to all crevices of everyday life, business activities, and administrative practices. Law creates and sustains things (Weinberger 1986), people, identities (Lopez 1997; Collier et al. 1995), and myths (Fitzpatrick 1992); prescribes and proscribes conduct; and structures processes, architectures, and actions. While law’s ontologies are multiple and often contested, and its mediators are also multiple (Latour 2009), variegated, and situational, its normativity is undisputed. Law establishes explicit normative expectations, seeking to actuate its ambitions and scripts (Hildebrandt 2020) by enacting multiple legalities (Kang 2019a).

Like law, code increasingly permeates our everyday lives and worlds, ‘mediating, supplementing, augmenting, monitoring, regulating, facilitating, and ultimately producing collective life’ (Kitchin and Dodge 2014, p. 9). While code is ubiquitous, pinning it down is difficult. On the one hand, code is lines of symbols in software. Code instructs computers on how to process and transform information inputs into outputs. On the other hand, code varies in scope and complexity. Code may be exceedingly simple, consisting of discrete ‘if, then’ statements that govern a single information-processing operation. However, coded assemblages may also twine together countless coded objects, infrastructures, and processes and contain millions of lines of sophisticated, sometimes humanly uninterpretable, code. These assemblages construct

sensoriums, each piece of software constructs ways of seeing, knowing, and doing in the world that at once contain a model of that part of the world it ostensibly pertains to and that also shape it every time it is used. (Fuller 2003, p. 19)

Even at the complex end of the spectrum, code remains multiple. For example, Kitchin and Dodge asserted that ‘code … is the manifestation of a system of thought—an expression of how the world can be captured, represented, processed, and modelled computationally with the outcome subsequently doing work in the world’ (2014, p. 26). While this may be true of traditional object-oriented code (e.g. Stroustrup (1988); Stefik (1985)), recent algorithmic technologies challenge anthropocentric assumptions. Deep learning approaches, used, for example, to implement computer vision applications and to create generative models such as Dall-E, Midjourney, and ChatGPT, produce uninterpretable code (Lipton 2018), creating a new kind of artificial intelligence (AI) code. This code builds on an ultimately statistical, Bayesian, correlative sensemaking that distils patterns out of data (Joque 2022; Amoore 2020), but it is often alien to human brains. With our causal and symbolic cognitive capabilities, we simply cannot fully grasp what is going on.

Making Law and Code Matter

To think about, analyse, and understand the increasingly frequent encounters between law and code, we propose a new materialist theoretical framework. We propose that encounters between law and code can best be understood if law and code are both ‘given back to matter’, resensitised, and rematerialised (Pavoni et al. 2018).

This mattering has taken strides during recent decades as novel materialist accounts of law have emerged, particularly within sociolegal and critical legal scholarship (Käll 2020, 2022; Cloatre and Cowan 2019; Kang 2019b; Grear 2018; Davies 2017; Philippopoulos-Mihalopoulos 2016; Conaghan 2013), but also within the social sciences (Latour 2009). New materialist accounts have typically criticised conventional narratives for perpetuating law’s ‘sleight of hand’. As Philippopoulos-Mihalopoulos (2015) explained, conventional modernist legal imaginaries (Grear 2015; see Schlag (2002) on grid and energy aesthetics) rooted in Cartesian binaries allow law to invisibilise itself and its co-constitutive link with matter. Abstract, disembodied accounts allow law to free itself from its material surroundings, hide its rich connections to humans and non-humans, and disguise its emergence ‘from non-hierarchical relationships between persons and things’ (Davies 2017, pp. 71–72). The new materialist accounts of law want to give law back to matter and recognise that law is a complex assemblage of material things, such as legal texts, books, databases, theories, libraries, humans, courts, prisons, processes, buildings, and images (Cloatre and Cowan 2019; Kang 2019a; Latour 2009).

Although code easily attracts an eerie abstractness and immateriality, software, like law, is always deeply embedded in matter. Standard narratives have stressed the symbolic, rational, and logical nature of code, disembedding it from its surroundings. Coding is abstract business, where impeccable abstract logics are applied to concrete problems. However, the standard narrative hides that code always enacts entire worlds when its ontologies are defined; builds complex material assemblages of data flows and computing assets when it is fitted into architectures and systems; depends on massive computing and data infrastructures; gets designed and deployed, ultimately, to solve relational problems; and arises out of particular infrastructural, technological, social, and political assemblages (Kitchin and Dodge 2014; Marino 2020). These rich material connections to other things render code in-material; that is, ‘stuff which may defy physical contact, yet which is incorporated in materiality’ (van den Boomen et al. 2009, p. 9).

To consider law and code as matter, we propose treating both as agencies in the sense of Karen Barad’s agential realism, and we conduct a two-level diffractive reading herein. Diffractive reading is a method derived from Barad’s (2007) work, whereby two or more texts are read against each other, and the contamination resulting from this combined reading is a performative endeavour (Merten 2021), highlighting differences, boundaries, gaps, and ruptures. Accordingly, we first read Barad’s agential realism against Simondon’s philosophy of information (and vice versa); then, in the case studies, we read legal conceptions of data and decisions against registers of coding/engineering (and vice versa).

Barad’s agential realism denies the Cartesian dualisms between matter and meaning. For Barad, ‘matter does not refer to a fixed substance; rather, matter is substance in its intra-active becoming – not a thing but a doing, a congealing of agency. Matter is a stabilizing and destabilizing process of iterative intra-activity’ (Barad (2007, p. 151), cited in Davies (2017, p. 60); italics in the original). Importantly, Barad’s agential realism is intensely relational. All ‘units of reality’ (Barad 2007, p. 33) are constantly entangled with other entities. In reworking Barad’s agential realism, we frame law and code as inherently entangled units of reality. This entanglement drives a constant ontogenetic process of becoming; as a unit of reality encounters another, the two become intertwined. Entanglement is how entities exist in Barad’s universe: all entities or elements emerge as agencies, phenomena originally co-dependent and co-generated (Barad 2007). Here, law and code are components that co-emerge and manifest themselves as phenomena in their original inseparability and intra-action.

Rereading Simondon through Barad, we argue that these entanglements lead to in-formation. The entangled entities do not sit still; instead, they constantly affect and tune-in to each other (Brighenti and Pavoni 2021, p. 9; Simondon 1992). The processes of in-forming proceed through constant iterations as the entities move from one state of becoming (or ontogenesis) to another. To illustrate this process of becoming, Kitchin and Dodge (2014) expanded on the idea of ontogenesis by showing that entities transform into each other to create new ones. Introducing Simondon’s (2020 [2005]) idea of transduction (see also Mackenzie (2002)), Kitchin and Dodge (2014) argued that code is continuously brought into being via complex operations whereby it is constantly in-formed and transferred from one level (previous state of becoming–being) to another through a transduction process (Tedeschi 2023) as new technological possibilities and entanglements with new matter, such as new data flows, emerge. In this process, code builds a ‘layered formation’ that is never fixed but always in-becoming (see Barad’s (2007) conception of matter as a process or Philippopoulos-Mihalopoulos’s (2014) concept of matter’s mattering). Law works in similar ways. As law becomes entangled with new technologies or normative ideas, it evolves in an ontogenetic process, adjusting to other matter enveloping it.

Ontogenetic Transduction or How Things Change

Barad referred to the entangled ontogenesis of matter(s) as intra-action. Agentic matter emerges in intra-action; that is, ‘the mutual constitution of entangled agencies’ (Barad 2007, p. 33). Not only do law and code transduce and build up accumulating past, present, and future states of becoming–being within themselves, but (and more importantly), they also allow and condition each other to unfold in a multiplicity of (future) ways and possibilities.

While this account allows us to sense movements in matter, it leaves the entangled intra-actions devoid of an animating force. However, read diffractively, Barad’s intra-action and Simondon’s transduction allow us to transconceptualise what drives ontogenesis.

The driving force is difference-information. This idea is repurposed from Gilbert Simondon’s philosophy of information. In Simondon’s philosophy, ‘the material takes on an active dimension; it has the capacity to inform and guide the actions of the maker’ (McCullagh 2019, p. 151; italics in the original). Difference-information arises as entities, each carrying its own distinct information, become entangled. In the process, differences in information cause difference-information—‘a differential tension’—to emerge. These tensions are released in intra-action as entities ‘tune-in to a novel dimension … so that a new coherence … appears’ (Brighenti and Pavoni 2021, p. 9).

In other words, law and code, for example, generate information as a productive–transductive difference as they go through intra-active movements between different statuses. At each new iterative level or status, law and code reach temporary internal ‘equilibrium’, and yet ontogenetically, they are constantly pushed to move towards their next level of becoming. Such temporary equilibrium is both maintained and challenged by ‘difference-information’ (or ‘tension-information’) set in a ‘non-deterministic sequence, presenting gaps and discontinuities’ (Bardin 2015, p. 4). Thus, law and code seamlessly (re)negotiate their becoming-with-the-other (Schick 2021).

The differences, which we may call gaps or ruptures, generated in this process of becoming–being (in the movements between statuses) are thus temporarily resolved when a new precarious equilibrium is established within law and code. It is not that law and code adapt themselves to new circumstances to compensate for their differences and ruptures, or their reciprocal unresolved ‘fights’ (as in Hegel’s three-step dialectics, where, in the final step, the tension is (re)solved), but rather, and more radically, that they create a new structure within themselves to solve the differences while becoming: ‘The notion of adaptation remains insufficient to account for the reality of the individual; it is in fact a question of self-creation through abrupt leaps that reform the structure of the individual’ (Simondon 2020 [2005], p. 518).

Simondon was referring to the ontogenesis of the individual, but herein, through a diffractive reading, we repurpose his conceptualisation to speculate on how the intra-action between units of reality generally takes place. The step in-between two moments of becoming, or statuses, constitutes a quantum leap, which, triggered by information (Tedeschi 2019), ‘coincides with a passing of a threshold to a qualitatively new level of existence’ (Massumi 2009, p. 43). In other words, information materially and ‘quantitatively accumulates’ by continually generating microevents that unsettle the temporary equilibrium between entities, and, consequently, make the entities (re)negotiate their being and proceed in their becoming. The turning point of this ontogenetic process occurs when a qualitative change of status (a difference) arises; that is, when the entities become something else after a certain amount of information has accumulated (an amount that needs to pass a certain threshold, or tipping point, to challenge the temporary equilibrium, trigger change, and prompt a move to a new level).

Thus, for example, the negotiation between law and code may proceed (e.g. through an accumulation of negotiated microevents) until a tipping point, or threshold (Milkoreit et al. 2018), is reached and crossed. Law and code then need to challenge their temporary equilibrium (or stability) and move towards the next level of becoming, creating new structures within and in-between themselves. We may, for example, consider the technical and legal struggles over AI explainability, whereby microevents concerning the ontological inscrutability of code continually unsettle the development of regulations until the latter are ‘forced’ to challenge their temporary equilibrium and move to their next level of becoming once the tipping point is reached. Conversely, regulations unsettle code by requiring changes, forcing code to move to new levels of becoming–being. This is how law and code become and co-evolve, intra-acting with each other to compensate for each other’s informational differences, gaps, and ruptures, or to iteratively and transductively generate new ones.
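The dynamic can be rendered concrete with a deliberately toy numerical sketch (ours, in Python; the quantities are arbitrary and carry no empirical weight): microevents accumulate difference-information until a threshold is crossed, at which point the entity leaps to a new level of becoming and settles into a new, temporary equilibrium.

```python
# A deliberately toy numerical illustration (ours) of the dynamic sketched
# above: microevents accumulate difference-information until a hypothetical
# threshold is crossed, at which point the entity leaps to a new level of
# becoming and settles into a new precarious equilibrium.

import random

random.seed(0)

level = 0          # current level of becoming-being
tension = 0.0      # accumulated difference-information
THRESHOLD = 1.0    # hypothetical tipping point

for step in range(30):
    tension += random.uniform(0.0, 0.2)   # one microevent of in-forming
    if tension > THRESHOLD:               # tipping point reached and crossed
        level += 1                        # qualitative change of status
        tension = 0.0                     # new precarious equilibrium
        print(f"step {step}: leap to level {level}")
```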

Explainability Intra-actions

In the preceding text, we have articulated a theoretical framework for understanding how code and law intra-act when they encounter each other. In the following sections, we conduct a diffractive, difference-oriented reading (Barad 2007) of a particular set of encounters and intra-actions between law and code to evaluate the effectiveness of the theoretical framework.

The intra-actions we trace herein emerge into view as law’s normative complex encounters increasing instances of automated decision-making and new kinds of AI code. Although law and code have intra-acted for decades, the situation reached a climax at the turn of the millennium as two contemporary transductions became apparent. Automated decision-making practices proliferated, increasingly affecting the rights and obligations of individuals. This transduction in-formed code into a significant social force with immediate consequences for people. The development irritated law. In response to the changes, the EU data protection rules gave data subjects a right, the ‘right to explanation’, for decisions made using code. However, the tension did not end there. Advances in AI introduced a novel source of friction. Code was turning uninterpretable, thus undermining potential alignment created by the right to explanation. Whatever explanations emerged became increasingly nonsensical.

In the following, we conduct a diffractive reading of a series of encounters between code and law that culminated in the current debate over the General Data Protection Regulation (GDPR), the right to explanation, and AI. To trace the intra-actions between code and law, we

1) identify isomorphisms, differences, gaps, and ruptures between law and code;

2) acknowledge the transductions that created shifts and changes in the level and status of entities while they transitioned from conventional modernist law and code towards AI code and AI-ready law; and

3) show that the struggles over explainability in the negotiation between law and code did not pre-exist but, borrowing Barad’s expression, ‘emerged through intra-actions’ (Barad 2007, p. 89).

Law and Code: Starting Positions

Traditional object-oriented computer code (e.g. Stroustrup (1988); Stefik (1985)) enacts itself as a series of stylised, human-patterned logical cognitive operations, but within computers (Carter 2007). The dry logic of ‘if, then’ statements within rigid, clear-cut abstract ontologies is the epitome of hyper-rationalised Taylorian cognition. Code achieves (or should achieve) what humans cannot: a logical and flawless process of information processing that proceeds unconstrained by the liminal spaces of human bounded rationality and its cognitive biases and failures (Brette 2022; Dupuy 2009).
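To fix ideas, the following deliberately simple sketch (ours, in Python; the lending scenario, names, and thresholds are invented for illustration) shows the kind of code this paradigm epitomises: every decision path is an explicit ‘if, then’ statement that a human can read, mentally simulate, and narrate back as a justification.

```python
# Illustrative sketch of 'traditional' rule-based decision code.
# Every step is an explicit 'if, then' statement that a human can
# simulate mentally and recount as a reasoned justification.

def assess_loan_application(income: float, debt: float, defaults: int) -> str:
    """Return a decision together with a human-readable justification."""
    if defaults > 0:
        return "rejected: applicant has prior defaults"
    if income <= 0:
        return "rejected: no verifiable income"
    if debt / income > 0.4:
        return "rejected: debt-to-income ratio exceeds 40%"
    return "approved: no defaults and acceptable debt-to-income ratio"

print(assess_loan_application(income=50_000, debt=10_000, defaults=0))
# -> approved: no defaults and acceptable debt-to-income ratio
```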

Modernist, grid-aesthetics law (Schlag 2002) and traditional code seem destined to coexist peacefully. Like code, 1970s modernist law’s material formations aspire to the same logical hyper-rationality fashioned around symbolic, formal logical operations that allow code to process data and produce outputs. In place of computers, clear-headed judges operate syllogism machines to apply the rules that emerged out of Hercules J’s brain or Chaim Perelman’s immaterial but real ideal audiences to actuate law in the real world (Wróblewski 1974), much as code’s ‘if, then’ sequences push inputs through logical gates towards outputs.

Akin to computers, judges in the proverbial ‘easy cases’ take input data from the external world, subject it to a battery of legal tests couched in symbolic and logical language, and produce outputs based on logical operations. As isomorphous entities, modernist law and traditional code slip easily into a comfortable temporary equilibrium in their intra-actions and information exchanges, as there is a structural affinity between the two matters.

Surviving Mess

Despite the equilibrium, differences and ruptures brewed under the surface. Tensions arise, for example, when law encounters bad or messy code because, in addition to its rationalised logical operations, law carries a normative agenda. It is also a tool for justice, the energetic ordering and reordering of the world (Schlag 2002). Law’s will is to set right the things it deems wrong and to advance the aims it holds important. Contract rules, for example, require that code conform to contractual specifications. Criminal and tort law rules enact processes that interrogate the causes of actions and ascribe blame for particular events to specific parties when undesirable outcomes, such as personal injuries, emerge from running code. Administrative contestation and accountability processes, at times, allow the affected parties to question the justification for various decisions. In all legal processes, law requires explanations of how things came to pass and demands that those explanations fit within its existing normative structures.

Thus, law becomes matter in the very act of producing its own causative and linear reality, creating narrative structures that make the world intelligible and governable but also invisibilising awkward actants (e.g. regarding neurosciences and criminality: Maoz and Yaffe (2016); Greene and Cohen (2004)) and causative patterns. Importantly, law’s yearning for explanations is a key matter in intra-actions. When law intra-acts with other matter, it requires that other matter succumb to law’s requirement for explainability, to become capable of living within law’s material explanatory structures.

Importantly, with traditional code this isomorphism held even when law imposed its normative yearning for explanations of code. Traditional code remained explainable. As a structure of action, it could coexist with law’s matter. Although the ontologies of traditional code did not always match perfectly with law’s requirements, traditional code epitomised by the object-oriented programming paradigm (e.g. Stroustrup (1988); Stefik (1985)) nevertheless unfolded in and framed a world populated with discrete objects and subjects and their causal interactions. Traditional code worked on intelligible objects and performed intelligible operations akin to law. This isomorphism allowed the law to export its narratives of what, how, and why things happened to code. In short, law’s algorithmic existence overlapped with that of traditional object-oriented computer code. Whenever law imposed its normative claim of explainability, the isomorphism between law and code offered a method for satisfying the demand. Their mutual and material co-constitution and intra-action created differences that resolved into temporary states of equilibrium—the comforting and predictable reality we desire.

Making Code Visible

Although law and traditional code appeared to maintain equilibrium despite the messiness of code, friction nevertheless arose through another pathway. Even traditional code partly invisibilises decision-making processes compared to those encountered in human decision-making. Decision-making is masked when it is embedded in computers, complex code assemblages, and copyright-protected private spaces (Bayamlıoğlu 2020, pp. 10–11), where enquiry capable of disentangling motivations and grounds is not immediately available, at least not in the same modality as with humans. This factual opacity jeopardises factual contestability, an important normative concern (Vredenburgh 2022).

In Europe, the right to explanation for automated decisions appeared to release the tension. While the primary normative motivations underlying the ‘right to explanation’ remain unclear (Edwards and Veale 2018), EU legislators decided to extend to data subjects a right to ‘know the logic involved in automated decisions’ in the 1995 Data Protection Directive (95/46/EC). This right, which was later retained in the 2016 GDPR, allowed an uneasy equilibrium to emerge. The contours of the right remain contested. GDPR Articles 13(2)(f), 14(2)(g), and 15(1)(h) give data subjects a right to receive information about

the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

Article 22, in turn, gives a data subject the right to ‘express his or her point of view and to contest’ an automated decision that significantly affects them. With this right, law has claimed all code as its target and imposed on code the obligation to make itself understandable, although what constitutes adequate explanation remains undefined.

In the literature, two positions have emerged to account for the type of right to explanation these provisions provide for data subjects. Wachter et al. (2017) argued that the GDPR provisions do not, in fact, give data subjects a right to explanation. According to the authors, the Regulation falls short of requiring a detailed ex post account ‘of the logic and individual circumstances of their specific decision, such as her credit score, the data or features that were considered in her particular case, and their weighting within the decision tree or model’ (Wachter et al. 2017, p. 78). The algorithms and the code itself are off limits, but controllers should ‘at least make [the code] available in compiled form for testing or reverse engineering’ (Polčák 2020, p. 407). Selbst and Powles (2017) countered this by arguing that the provisions should be construed with the right to contest as the centrepiece and that a ‘full’ explanation should thus be given. The issue remains undecided and is essentially the subject of a pending request for a preliminary ruling (C-203/22 Dun & Bradstreet).

Uninterpretable Code

While the right to explanation emerged, it soon became clear that it could not ensure permanent equilibrium. Technological advances dislodged the balance by allowing computer scientists to take data, subject it to advanced computerised analysis based on sophisticated methodologies, and uncover novel ways of sensing objects, understanding (cor)relations between objects, and, importantly, formulating uninterpretable decision-making algorithms. Deep learning technologies have opened avenues for code to perform a non-narrative, non-symbolic, non-causationist, and non-anthropomorphic modality of sensemaking (Joque 2022; Lipton 2018). The new AI code became capable of extracting non-symbolic correlationist abstractions of past data clusters and projecting them to perform and, at times, enact unprecedented, unintelligible alien realities (Amoore 2020). It was no longer operating within the familiar confines of intelligible objects and causal relations. In the new code, things happened and worked, but no one could really explain how and why.

In technical terms, big data and deep learning code are not globally simulatable, as computer scientists frame the issue (Lipton 2018). Code cannot be reduced to logical ‘if, then’ statements that humans can simulate in their minds as the algorithms of the new code grasp the world in ways that cannot be directly mapped onto semantic, humanly understandable ontologies (Selbst and Barocas 2018; Mittelstadt 2016). The code, thus, fundamentally challenges law’s comforting meaning-mattering into dual, intelligible categories and causal narratives.
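The contrast with rule-based code can be made tangible. In the following illustrative sketch (ours, using the scikit-learn library; the data are synthetic), the decision logic of even a small neural network resides in numeric weight matrices that admit no direct ‘if, then’ reading:

```python
# An illustrative contrast (ours): even a small neural network's decision
# logic lives in numeric weight matrices with no direct 'if, then' reading.

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                      random_state=0).fit(X, y)

# The 'program' that now makes decisions is these arrays of floats.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {w.shape}")

# Unlike the rule-based sketch earlier, no single weight corresponds to a
# humanly statable rule; the model can only be simulated by running it.
```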

Intra-actions Between Law and Uninterpretable Code

When law and the new kinds of uninterpretable AI code encounter each other, tensions become inevitable. As code has moved on to become an unprecedented and functionally unintelligible unit of reality, it ceases to contain at least one of the qualities law requires of it. Law, as a normative complex, demands explanations and human-intelligible narratives that can justify the decisions flowing from code, but code can no longer provide them.

In terms of theory, code has moved to its next level of becoming, and its material entanglement with law is exerting a demand that law follow it and also move on. Technological advances have transduced a technological material reality that is unknown to law and left law as a normative complex facing a novel sensemaking modality that cannot be translated into law’s existing language. This jeopardises law’s ability to create order and impose a binary code of acceptable/unacceptable. In effect, law’s operation is interrupted in the zones where code is present if either law or code refuses to budge.

In theoretical terms, again, the tension between the units generates metastable differences that, repurposing Simondon, we call information. Law and code have become different in a way that has created an incompatibility. They cannot coexist as they stand. As the units of reality then start moving to the next level in their becoming to resolve the tensions, the strong-form isomorphism between law and code seems to dissolve into a succession of replacements, or transductions, in which new structures are created within the entities.

Code Budging

The first set of replacement intra-actions affects code. After uninterpretable AI code emerged, efforts were made to interpret it. Computer scientists started to build explainable AI (XAI) methodologies. Instead of full-scale simulatability, XAI offers substitutes for ‘full’ legal explainability. These substitutes are known as post-hoc interpretability tools. Two families of techniques are prominent. The first helps to disentangle the training data features that drive the model outputs. These techniques include dataset feature importance scores that ‘try to capture how much individual features contribute, across a dataset, to a prediction’ (Murdoch et al. 2019, p. 22076). Such analyses allow analysts to understand the data clusters that algorithms identify as important and how different significant clusters correlate and interact, ultimately deepening their understanding of the algorithms by potentially quantifying the coefficients between features and calculating statistical feature importance scores. Visualisation methods may add to the intelligibility of, for example, image recognition model interpretations by displaying the features that the models rely on to classify images. Finally, error and outlier analyses may contribute to debugging datasets and identifying faulty data inputs. The second family of techniques helps analysts understand the input features that affect model predictions. Again, the methodologies rely on statistical sensemaking and produce input feature significance scores (Lipton 2018).
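To illustrate the first family of techniques, the sketch below (ours; the model and data are synthetic stand-ins, and permutation importance is only one common method among several) computes dataset-level feature importance scores by measuring how much predictive accuracy drops when each feature is shuffled:

```python
# A sketch (ours) of the first family of post-hoc techniques: dataset-level
# feature importance via permutation. The model and data are synthetic
# stand-ins; permutation importance is one common method among several.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# a statistical account of which data clusters the model leans on, not a
# causal narrative of why any single decision was made.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```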

The emergence of the XAI movement (Gunning 2019) is the first transductive response. However, this response leaves the basic tension lingering. The substitution that AI code offered and could accommodate is not perfect. The techniques produce useful information about algorithm logics, allow humans to tell stories of what factors are relevant to the decisions the models underpin, help debug problems, identify shortcomings, and otherwise make it easier for algorithms to ‘travel between laboratory and deployments in a series of questions about whether the algorithms are useful, or if they are “good enough”’ (Amoore 2020, p. 67). However, instead of a full-blown ‘legal’ explanation, code can only offer law lengthy accounts of the statistical clusters of past data that emerge as the significant things the algorithms operate on to produce the future. As Amoore (2020) has demonstrated, the explanations remain madness—the impenetrable drivel of a lunatic—yet law cannot neglect them because, outside law, they pass as sound accounts.

Law Moving

Here, after the failure of the first move, a second transductive move is starting to become visible. While the first movement was a code-side microevent that left law largely unaffected, law may now be starting to budge as well, as further code-side movements push it towards a tipping point. The best example of what will likely happen was provided by Bayamlıoğlu (2022), who argued that law has multiple options available for responding to AI code within the GDPR ‘right to explanation’ framework. The right to explanation for AI code may arrive at a binary equilibrium where post-hoc interpretability emerges as either a sufficient or an insufficient explanation tool for law.

One option would be to re-entrench and affirm law’s existing explanatory demands and framework. Law would not have to change, but code would suffer a blow. If code could not transduce itself into compliance with what law requires, then law would have no option but to suppress it. The right to explanation would transform into a ban on automated decision-making using AI code. If this development occurs, code will inevitably encounter an immovable unit of reality in law.

Another option would be for the law to concede and accept that code can move it. Law would adjust to code, transducing explainability and allowing the halting, imperfect narratives that post-hoc interpretability tools produce to pass for explanations. This might have important implications downstream as the new legal explainability reverberates within law’s body. A tipping point would be reached, and new legal structures would be created.

However, according to Bayamlıoğlu, spectral in-between positions are also conceivable. The tensions might be resolved by multiple concurrent code- and law-side intra-active permutations. Law might transduce its explainability demands and materiality into new, nuanced assemblages. Bayamlıoğlu envisaged, for example, that developers might opt for simpler, more explainable models to optimise explainability while retaining the benefits of machine learning approaches. However, the most important adjustment might arise from implementing the explainability demand with second-layer ‘institutional, administrative, or procedural’ transparency measures ‘accompanied with ex-post interpretability tools and methodologies’ or with code that is black-box tested to ensure adequate functionality even when it is unexplainable (Bayamlıoğlu 2022, p. 17).

In practice, such administrative or procedural transparency measures would transduce law’s explainability requirements into a multispectral assemblage of transparency and accountability arrangements attempting to ensure acceptability. Instead of explanations, law would require code to subject itself to codes of conduct, certification processes, agreed standards, and ethical review boards. Such devices are a ‘host of self-, meta-, and coregulatory instruments and techniques [that implement] a cooperative problem-solving approach between the regulator and the regulatee’ (Bayamlıoğlu 2022, p. 15) to ensure what explainability once did—normative control over decision-making. Here, an intra-action is, again, visible. Code offered its own mode of governance, and law internalised it by incorporating the self-regulatory techniques and practices developers already use.

Resorting to black-box testing (Bayamlıoğlu 2022, pp. 15–16) would reveal another normative transduction. In black-box testing, code is treated as an opaque object that nevertheless acts. Instead of trying to disentangle the mechanisms inside the box, black-box testing is interested in outcomes. If the outcomes are acceptable, then the black box is also acceptable. In this approach, the normative ordering role of explainability is populated by another ordering technology. Instead of enquiring why undesirable outcomes arose ex post facto, law can also seek to suppress outcomes ex ante. This is what happens when code is subjected to black-box testing. Black-box tested code can remain unexplainable because law exerts its normativity through another materiality: it establishes normative standards for external code behaviour, evidenced through the code’s performance within testing assemblages.
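A schematic sketch may clarify this shift of modality. In the following illustration (ours; the acceptance criteria, an accuracy floor and an outcome-parity gap, are hypothetical placeholders rather than requirements drawn from any legal instrument), the model is an opaque callable that is judged solely on its outward behaviour:

```python
# A schematic sketch (ours) of black-box testing: the model is an opaque
# callable judged only on its outward behaviour. The acceptance criteria
# (accuracy floor, outcome-parity gap) are hypothetical placeholders, not
# requirements drawn from any legal instrument.

import numpy as np

def black_box_test(predict, X, y, group, min_accuracy=0.85, max_gap=0.1):
    """Accept or reject an opaque model on its outcomes alone."""
    preds = predict(X)                    # no access to internals
    accuracy = np.mean(preds == y)
    # Compare positive-outcome rates across a (binary) protected group.
    rate_a = np.mean(preds[group == 0])
    rate_b = np.mean(preds[group == 1])
    ok = accuracy >= min_accuracy and abs(rate_a - rate_b) <= max_gap
    return "acceptable" if ok else "unacceptable"

# Usage: any prediction function can be tested without being explained.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)
group = rng.integers(0, 2, size=200)
print(black_box_test(lambda data: (data[:, 0] > 0).astype(int), X, y, group))
```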

Bidirectional Permutations

While law appears to be doing most of the budging, both spectral permutations appear bidirectional. Both law and code are affected in ‘a spirit of perpetual experimentation’ (Kalpokas 2019, p. 41). Bidirectionality is visible in the way the administrative or procedural transparency measures of code transduce law, but also transport law’s demands into code. The best example of this intra-action can be found in the European Union AI Act (European Commission 2021) proposal. The Act subdues the conventional modernist regulatory modalities of command and control. Instead of imposing itself on technology using conventional binary legal technologies, law has morphed into a probabilistic agent of persuasion in the proposal. Within the Article 9 risk management process, the Act deploys Bayamlıoğlu’s ‘self-, meta-, and coregulatory instruments and techniques’ to shape code. Developers must identify AI system risks and minimise them to enable residual risk levels to ‘be judged acceptable’. These instruments and techniques are part of and internal to code and its materiality—the very precursors of code’s emergence. However, law infuses instruments and techniques with its normative prerogatives. The risks to be identified are law’s risks: risks to health, safety, and fundamental rights. The outcome is law’s outcome: a residual risk level that does not impose excessive risks on the populace (Czech Presidency 2022, Article 67).

Conclusion

In this article, we conducted a diffractive reading of Barad’s agential realism and Simondon’s philosophy of information to speculate about how law and code are mutually co-constituted, inseparable, and entangled agencies, and how their intra-action works, in terms of transduction. Although law and code try to (dis)simulate their appearance as perfectly stable and fully operating systems, they intra-act as metastable systems, constantly (re)negotiating their becoming–being. They also transduce themselves into the other, to then become themselves once again in an endless, iterative, and intra-active cycle of mattering and differentiating. In this sense, law and code can be seen as material agents with the potential to contaminate and in-form each other and then move to future levels of becoming while maintaining, at each level, a precarious equilibrium. This process of constant in-forming, modulating, and intra-acting ontologically and ontogenetically comprises discontinuities, ruptures, and gaps. Information is, itself, a discontinuity in the way matter matters—a difference or rupture in the process of becoming of law and code, whereby a new, temporary stability results from the tuning-in of the previous level with the new one to form a new structure within and in-between the phenomena.

What ensues is a preliminary and foundational theoretical and empirical examination of how law and code are ontologically co-constituted and influence (intra-act with) each other. Specifically, our study sheds light on how code, albeit primarily and ontologically understood as a unit of reality ‘with roots in mathematics, formal logic and electrical engineering’ (Draude 2020, p. 21), is agentially intra-acting with law. The article then shows how law and code are instantiated in the ways in which both the GDPR and algorithmic decision-making within AI technologies co-emerge and challenge explainability. Although this intra-action appears unidirectional because law seems to impose its will on code, such an appearance is deceptive. Depending on the outcomes of the negotiations, code may, in fact, have forced law to recombine itself. The material composition of the explainability that law requires may have morphed, adjusting to the exigencies of code. Instead of insisting upon the old materialities of explainability that reach into courts, judges, and the juridical veridiction machinery, code may force law to acknowledge that different material entities of ordering may be sufficient. In Bayamlıoğlu’s (2022) ‘in-between’ options, post-hoc interpretability tools and institutional, administrative, or procedural transparency measures or black-box tests enact the explainability that law has morphed to require under the weight of code. Law is resisting this new, visibly nonlinear way of becoming that code is forcing upon it. Law’s process of becoming has always desired to make itself invisible (Philippopoulos-Mihalopoulos 2015); now, code requires law to show itself and at least partially abandon its binary and causal narratives. It seems as though law has reached a tipping point and is being forced to face the abrupt self-creation and reshaping of its own structure. This is how future possibilities are formed: ‘Intra-actions iteratively reconfigure what is possible and what is impossible – possibilities do not sit still’ (Barad 2007, p. 177).

Tracing intra-actions is thus an exercise in tracing differences and ruptures following adjustments, which can be either microevents that only slightly affect entities or major ones that push entities towards a tipping point and create important new structures. ‘Draw a distinction, otherwise nothing will happen at all. If you are not ready to distinguish, nothing at all is going to take place’ (Luhmann 2006, p. 43). This difference/distinction is an essentially ontological act, and, as previously mentioned, the differences in terms of gaps and ruptures that law and code generate in their intra-acting movements are essential for their survival as fully working systems. While most ruptures generated in the intra-actions between the two systems are not apparent, the visible onto-epistemological act occurs, for example, when a specific rupture and a line of visibility are drawn by law (e.g. for making decisions, and, thus, making something explainable for the sake of such decisions). When such a distinction is made, the intra-action between law and code is effectively shelved (crystallised) into a specific, fixed spatiotemporality, and other intra-acting movements between law and code, and, more specifically, other realities produced by code, are inevitably excluded or ignored. For AI code, the principal (ontological) characteristic of machine learning algorithms (i.e. their multiple and irreducibly heterogeneous feeding of not one but a multiplicity of different realities in terms of modes of existence and implementation) must be ignored for ‘system law’ to draw a line and make something explainable for its own sake.

Rereading a piece of the complex reality of AI code and regulations through a new materialist perspective, as we have done in this article, has some advantages. First, it provides the theoretical foundations for understanding law and code, not as two separate, unmixable entities, but as two agentic elements that intra-act and generate (transduce into) reality in their intra-acting. This establishes the basis for future theoretical and empirical endeavours that can cut ‘across natural and cultural domains, thereby eliding also the conventional division between the “sciences” (exclusively ascribed concern with nature and technology) and the “humanities” (concerned with all things human, social and cultural)’ (Braidotti (2013, p. 172), cited in Fox and Alldred (2017, p. 22)). These future scholarly endeavours may want to address not only explainability, but also discriminatory practices, informational asymmetry, and the entanglement between human and non-human agency (Kim 2020), amongst other phenomena. Second, the non-separation between and mutual contamination of law and code discard both law’s and code’s privileged positions as superior entities that shape reality while existing abstractly and separately outside it. This ‘allows for an opening up of hitherto prohibitive epistemic “closures” in the law, of legal discourse more generally, and of the world order that the law operatively seeks to maintain’ (Kotzé and Kim 2019, p. 3). We have also shown how the contamination of law can theoretically occur, via iterative operations of transduction. Third, to allow law and code to contaminate each other, we have produced concepts in Deleuze and Guattari’s sense of ‘“becomings” that disconnect habitual relationships and make new connections’ (Deleuze and Guattari (1994, p. 18) cited in Fox and Alldred (2017, p. 93)), which are, in turn, components of materiality. In this way, we have joined numerous scholars who are trying to give law back to matter. Thus, the broader fields of critical legal and sociolegal studies and their empirical applications may benefit in the future from the theoretical foundations established in this article.