1 Introduction

Data doubles, algorithmic subjects, digital selves (Haggerty and Ericson 2000; Aradau and Blanke 2018; Lupton 2020): as these new cybernetic figures enter the terrain of our thinking, the ‘spiral dance’ of the cyborg (celebrated by Haraway as an escape from the grasp of modernity) seems to have become a somewhat more anxious routine (Haraway 1991, pp. 149, 181).Footnote 1 If, for Haraway, the ‘hybrids of machine and organism’ suggested a way out of the ‘maze of dualisms’ in which the modern subject is trapped (Haraway 1991, p. 181), contemporary accounts of the more-than-human, data-driven avatars that are enacted in practices of algorithmic governance signal not the ontological emancipation but the epidermalization, dissolution, and displacement of the subject (Browne 2010, p. 131; Rouvroy 2012). Building on these strands of literature, this contribution traces how ‘algorithmic governmentality’ or ‘governance by data’ disrupts some central tenets of social relationality and collectivity embedded in modernist ideals of the liberal legal subject (Rouvroy and Stiegler 2016). We show this by exploring digital bordering practices, demonstrating how, at the ‘virtual border’, the human subject is splintered into a ‘cluster’ of pulsing patterns distilled from disaggregated data. Yet, while diagnosing a disruption of the liberal ideals of the subject, we keep a distance from precisely these ideals—ideals that flourish in attempts to reinvigorate the ‘rule of law’ and to attune its values of human autonomy, publicness, or democratic inclusion to new computational settings (Hildebrandt 2020). We trace the algorithmic reconfiguration of sociality without reifying modernist notions of liberal subjectivity.

The argument unfolds in three steps. First, we recount how practices of subjectivation have been problematized in critical theory—with sites of subject-making being subversively (re)appropriated—to counter modes of neoliberal governmentality. We explore how such projects are complicated by emergent practices of ‘governance by data’ where the conditions of collective recognition and being-in-common—on which the activation of ‘subversive subjectivities’ hinges—are eroded. Drawing on an analysis of algorithmic risk assessment at the ‘virtual border’, the article identifies the formation of ‘clusters’ as objects of governance and modes of relationality that increasingly displace alternative enactments of (individual or collective) subjectivity. Second, we observe that this displacement can be read together with accounts of the degradation of sociality resulting from algorithmic governance and the crisis for the ‘rule of law’ that this entails. Finally, however, we resist reaffirming modernist modes of representation that are called for in these strands of scholarship and propose a style of ‘legal imagination’ that stays with fugitive, opaque and experimental modes of being and becoming. This be(com)ing does not strive for more transparency, autonomy, and inclusion under the guise of formal equality but seeks to activate the indeterminacies of algorithmic calculation—its forks, thresholds, weights and apertures—as openings toward different expressions of sociality and being-in-common. With caution and awareness of our positionality as White scholars, this attempt at inviting a new ‘legal imagination’—in line with the theme of this special issue—draws on different strands of critical Black studies.

2 Subjects and clusters: new modes of algorithmic be(com)ing

In social and critical theory, expressions of power and domination (as well as resistance, solidarity and emancipation) have long been perceived as entangled with the production of subjectivity—with the techniques of subjection or subjectification that shape social and legal agents in service of specific ideological formations or governance scripts (Althusser 1971; Foucault 1982; Butler 2000; Samaddar 2010).Footnote 2 In this vein, our analysis aligns with Foucault’s interest in ‘the different modes by which, in our culture, human beings are made subjects’ (Foucault 1982 p. 777)—in the relational and material processes of be(com)ing subject. The focus on materiality is important here: inspired by contemporary strands of new materialism and STS, we are particularly interested in how subjects are performatively enacted through sociotechnical (and increasingly data-driven) practices of governing.Footnote 3 While technologies of seeing, sensing, and sorting have long been perceived as central to the making and modulation of subjects in disciplinary or biopolitical regimes of rule (Hacking 2006), we think with those who argue that big data and associated algorithmic analytics are changing the conditions of visibility and compositions of sociality in these political formations—changes that spark specific legal anxieties (Aradau and Blanke 2018; Isin and Ruppert 2020; Johns 2021; Amaro 2021).

Lingering a little longer with these prior accounts, we share their perspective on power as productive: as an alignment of rationalities and techniques that engender new images of the political and social subject (Mezzadra and Neilson 2013, pp. 250ff).Footnote 4 Brown, for example, qualifies neoliberalism as a ‘constructivist project’ that is not primarily aimed at regulating economic relations but at cultivating the figure of the homo economicus as central vector of social and political life (Brown 2005, p. 40). This claim has powerfully been reframed by Pahuja (2011) in the context of international law’s enduring colonial commitments. To understand the ‘powerful technology of universalization’ at work in international law and development, Pahuja notes, we need to account for how the ‘poor person’ is ‘interpreted as a very specific (legal) subject, the homo œconomicus’: ‘[t]his proto-capitalist subject, possessed of an agency that is entrepreneurial and not radical’ (Pahuja 2011, pp. 219–221). In a similar vein, Urueña (2012) gives an account of the ‘fragmented subjectivities’ emerging from international law’s increasing preoccupation with the individual. Suspicious of those celebrating the empowering or emancipatory status of individuals as subjects of international law,Footnote 5 Urueña (2012) observes the alienation of those caught between the conflicting subjectivities of self-improvement and dependency—between the homo economicus of international economic law and the ‘empty souls’ of human rights law who can only ever be ‘spoken for’. The preoccupation of international lawyers with the productive power of subject-making is perhaps most explicit in critiques on quantification or governance by indicators. Situating these metrological techniques in a longer lineage of governmentality, Merry (2011, p. 90) traces how ‘the indicator comes to shape subjectivity’—engaging the person in ‘governing himself or herself in terms of standards set by others’.Footnote 6 The common threads in these critical accounts point to the production of a neoliberal subjectivity where discipline is internalized, multiplicity erased, and agency eroded.

Yet, ‘writing in the spirit of Marx’, Mezzadra and Neilson (2013, p. 252)Footnote 7 signal how these modes of subject-making are inevitably tied to the ‘theme of the liberation of subjectivity’—‘of revolutionary subjectivity’. The ‘subject’, they assert, is marked by an ‘excess that can never be fully expropriated’: a defiant and disruptive subjectivity emerging ‘in relation to and in tension with the figures … that are its correlates in the legal and political realms’ (Mezzadra and Neilson 2013, p. 252, 264). Rancière (1998, pp 16, 28), in this vein, situates the ‘political’ sphere (in contrast to the regimes of ‘police’) in the ‘destabilizing and subversive … subjectivation of the part with no part’—a reactivation of the ‘contingency of equality’.Footnote 8 These (collective) acts of disruption and defiance are often explicitly oriented around or against existing legal inscriptions or classifications. When the constitutional right to ‘human dignity’ devolved into managerial ‘metrologies of dignity’ in the South African ‘toilet wars’, Marks (2021) observed, affected communities mobilized around defiant expressions of indignity, consciously shedding the veil of their abstract legal subjectivity to underline the injustices it sustained. In a different setting, Mann (2021, p. 147) observed how migrants dwelling in Europe’s borderlands often engaged in ‘strategies of concealment and misrepresentation’, whereby the formal status of legal citizenship is discarded and traded for the ‘costumes of people who enjoy the individual protections granted to humans as such’ (Mann retells an encounter with Dominican asylum seekers disguised as Somalis).Footnote 9 These ‘border masquerades’, as Mann observes, are subversive and strategic substitutions between different sources of legal personhood (the ‘citizen’ and the ‘human’) that seek to disrupt the technologies of exclusion grafted on the former with (consciously fraudulent) imaginations of defiant universality expressed in the latter (Mann 2021, p. 154).Footnote 10 With both Marks and Mann, we thereby encounter expressions of the excess to which Mezzadra and Neilson (2013, p 275) have referred: emergent forms of subjectivity that target and (re)appropriate existing legal inscriptions. Importantly, for such expressions to obtain a durable and organized political form, particular material conditions need to be in place for the translation and creation of a collective subject in the making. These can relate to experiences of social recognition, commonality of purpose, or a shared sense of duress on which the formation of subversive political subjectivities hinges.Footnote 11 In relation to the biopolitical or neoliberal regimes of rule to which we have briefly alluded above (those built, for example, around the statistical techniques of classifying and calibrating populations) (Foucault 2008), such conditions of collective action were always implicit. As Desrosières (2002, p. 401) argued, statistics can be politicized and polemicized precisely because they work with and enact ‘stable collective objects, or the production of categories that can become evaluated and contested publicly’.Footnote 12 Subversive subjectivities, in this sense, are always latent in the statistical classifier.

With the rise of ‘algorithmic governmentality’,Footnote 13 we want to argue, these practices of classification and subjectivation have taken novel forms. Big data does subjects differently. Our interest in tracing such new forms of subject-making is sparked first of all by the invitation of this special issue to reflect on how new digital technologies transform and trouble key legal concepts and categories, which we observe in the reconfiguration of the legal subject.Footnote 14 This inquiry is also motivated by the observation that the material making of subjects—how people are seen, sensed and sorted in specific governance regimes—entails dynamics of power, inequality and exclusion that we consider to be insufficiently addressed in current attempts to regulate, proceduralize, or fix new digital technologies. Finally, we are particularly interested in how the use of big data and data analytics alters (and potentially erodes) the conditions for collective action and political subjectivation briefly touched upon above.

One prominent site where these new digital divisions materialize is the ‘virtual border’: the assemblage of databases, algorithmic processes, and sensors that surveil and sort people on the move (many more sites could be identified, of course, from online advertising to tools of military targeting) (Van Den Meerssche 2022). In a recent strategy on ‘the use of artificial intelligence in border control, migration, and security’, the European Commission identified nine areas of opportunity for artificial intelligence (AI) in this domain, ranging from ‘vulnerability assessments’ in asylum applications and the use of data analytics to detect ‘irregular travel patterns’ to the algorithmic screening and ‘triaging’ of visa applications and border crossings (European Commission 2020). Displaying the pre-emptive logic of contemporary security practices, the stated objective of these AI systems, as a recent report of the European Parliament noted, is not just to ‘verify[] and identify[] known persons’, but ‘to identify unknown persons of interest based on specific data-based risk profiles’ (European Parliament 2021).Footnote 15 In this sense, the strategy states, AI would enable processes of ‘[r]isk assessment performed on a group of individuals with the general aim to find patterns and cluster individuals for further investigation’ (European Commission 2020, p. 10, emphases added).

This orientation towards ‘patterns’ and ‘clusters’ is elaborated in the technical sections of the strategy, which clarify that the ‘classification categories’ dividing people at the border ‘could be defined based on a risk threshold or specific indicators’ or could be ‘less pre-defined where applications are grouped based on some “learned” similarity’ (European Commission 2020, p. 89). In the latter case, the use case description states, unsupervised AI models would be employed, using ‘vector space models’ to ‘partition data into clusters’ (European Commission 2020).Footnote 16 The key ‘benefit of using AI’, the strategy notes, is a flexibility in ‘uncovering correlations between input data and classification outcomes’ while also ‘allowing for more variety in input’ (European Commission 2020).Footnote 17 The extension of what can count as a feature is a further advantage of such vector space models. As Mackenzie (2015, p. 434) explains: ‘[t]he difference between classical statistics, which [seek] to explain associations between variables, and machine learning, which seeks to explore high-dimensional patterns, arises because vector spaces juxtapose almost any number of features’.Footnote 18 The inclusion of ever-more heterogeneous data is central to the strategy’s aim to ‘identify patterns which were not observed as “strange” before’—a distillation of meaningful attributes and features through the ‘unsupervised uncovering of correlations’ (European Commission 2020, p. 90).Footnote 19 It is the ‘uncovered correlation’ and not (only) the legal classification that divides people at the digital border.
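
To render the strategy’s vocabulary of ‘vector space models’, ‘learned similarity’ and ‘partitioning data into clusters’ a little more tangible, the following is a minimal, purely illustrative sketch of unsupervised clustering; the feature names, the synthetic data and the choice of k-means are hypothetical assumptions of ours and do not reproduce any system described in the strategy.

```python
# Illustrative sketch only: hypothetical features and synthetic data,
# not an actual border-control system. Assumes scikit-learn is installed.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Each row is one (synthetic) application, encoded as a vector of
# heterogeneous features; the column names are hypothetical.
feature_names = ["age", "prior_visits", "days_requested", "route_changes"]
applications = np.array([
    [24, 0, 90, 3],
    [31, 2, 14, 0],
    [45, 7, 10, 0],
    [22, 0, 85, 4],
    [52, 12, 7, 1],
    [29, 1, 30, 2],
])

# Standardise so that no single feature dominates the distance metric.
vectors = StandardScaler().fit_transform(applications)

# Unsupervised partitioning: groups emerge from proximity in the vector
# space ("learned similarity"), not from any pre-defined legal category.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for row, label in zip(applications, model.labels_):
    print(dict(zip(feature_names, row)), "-> cluster", label)
```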

These strategic ambitions have led to a range of EU-funded experiments, such as the iBorderCtrl and Tresspass projects, that developed AI systems to ‘compress all data into actionable risk scores’ (Van Den Meerssche 2022). The experimental character of these changes is significant, and signals a broader shift in policy. Aradau’s claim that ‘experimentality’ has become ‘a mode of governance in borderzones’—an experimentality attuned to the space of play of machine learning models—resonates in the recent UK Border Strategy, which sets out that ‘the private sector must take the lead on border innovation, with government … creating an environment that encourages experimentation and technology adoption’ (HM Government 2020; Aradau 2022). At the ‘virtual border’, the power to divide and distribute is increasingly wielded in experimental form.

Yet we can already observe the workings of this logic in the legal and infrastructural development of concrete border control programs, both in the EU and the UK. The European Travel Information and Authorisation System (ETIAS) that is currently under construction, for example, has developed its decision-making process around the identification of ‘security risks’, ‘illegal immigration risks’, and ‘high epidemic risks’ (Council Regulation 2018/1240, OJ L 236/1, Article 4). The ETIAS regulation provides that these risk assessments will rely on ‘an algorithm enabling profiling’ based on ‘specific risk indicators’ (Council Regulation 2018/1240, OJ L 236/1, Article 33). The latter are to be created by Frontex, based on the specific ‘risks’ defined by the European Commission. In a recent delegated act, the Commission clarified that such ‘risks’ would be determined on the basis of ‘particular sets of characteristics’ distilled from data that signal a heightened propensity to patterns of refusal of entry, overstaying, identified security threats or observed disease outbreaks (European Commission 2021, articles 3–6).Footnote 20 This observation of particular ‘characteristics’—defined as ‘distinguishing sets of observable qualities or properties’ that could be ‘attributed’ to ‘specific groups of travelers’—would also be enriched with the identification of ‘any correlation with information collected through their application files’ (European Commission 2021). In this sense, the practice of pattern detection will necessarily be a ‘live’ process of iterative review and adaptation through which observed attributes and correlations become part of the open-ended taxonomy of ‘risk’ (European Commission 2021). The role of Frontex is to develop ‘risk indicators’ on this basis—an ‘algorithm enabling profiling’—which would allow for the automated processing of applications (Council Regulation 2018/1240, OJ L 236/1, Articles 22 and 33). In doing so, it is assisted by the recently created EU Innovation Hub for Internal Security, which is involved in exploring how AI will be deployed to optimize the performance of profiles (EU Innovation Hub for Internal Security 2021).Footnote 21 These ‘profiles’ signal the particular practice of subject-making as a new mode of algorithmic be(com)ing at the ‘virtual border’: decisions are not based on stable legal signifiers but on ‘characteristics’ and ‘correlations’ temporarily tied to observed patterns and propensities in data. In the computational calculus of ‘risk assessment’ envisaged here, the subject that is being enacted is a cluster of inferred attributes: a relational association between dispersed data points (or in more technical terms: a spatial proximity between emergent features in a vector space).
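
Stripped to its computational core, an ‘algorithm enabling profiling’ of this kind can be imagined as little more than a weighted checklist applied at scale. The sketch below is a hypothetical illustration of such indicator-based triaging; the indicators, weights and threshold are invented by us for exposition and are not taken from the ETIAS screening rules.

```python
# Illustrative sketch only: hypothetical indicators, weights and threshold.
# No resemblance to the actual ETIAS rules is implied.

# Each "risk indicator" pairs an observable characteristic with a weight.
RISK_INDICATORS = {
    "previous_overstay": 2.0,
    "travel_document_recently_issued": 0.5,
    "destination_flagged_for_epidemic_risk": 1.0,
    "occupation_in_watchlisted_sector": 1.5,
}
MANUAL_REVIEW_THRESHOLD = 2.5  # hypothetical cut-off

def screen(application: dict) -> tuple:
    """Score an application against the indicator profile and triage it."""
    score = sum(weight for indicator, weight in RISK_INDICATORS.items()
                if application.get(indicator, False))
    outcome = ("refer_to_manual_review" if score >= MANUAL_REVIEW_THRESHOLD
               else "auto_authorise")
    return score, outcome

# Usage: a synthetic application triggering two indicators.
print(screen({"previous_overstay": True,
              "occupation_in_watchlisted_sector": True}))
# -> (3.5, 'refer_to_manual_review')
```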

The cluster is an increasingly prevalent form of algorithmic subject-making that appears far beyond the spaces of the ‘virtual border’.Footnote 22 In line with the account above, several salient (and intertwined) characteristics of the cluster can be distilled, which display how it ‘troubles the fundamentals of legal and social representation and relation’ (Johns 2021, p. 66). The cluster is a fluid, relational subject. The ‘specific groups of travelers’ referred to above are only tentatively and temporarily tied together based on attributes and patterns extracted from data and always open to revision.Footnote 23 This underlines how important the continuous collection of data is in the composition of clusters: in the ever-expanding vector space, sensory tools are key to figure and refigure a subject that is inherently incomplete and continuously in (re)composition.Footnote 24 In the case of the experimental iBorderCtrl and Tresspass projects referred to above, for example, this includes on-site observations, biometrics and biomarkers (emotion AI), and social media scraping. This subject is not defined by stable and identifiable features but emerges from relational ties and associations between dispersed data points—from patterns and propensities. This fluid, relational character of the cluster limits the forms of political subjectivation mentioned above: the ‘translation of the common’ is troubled by the impossibility of encounter and mutual recognition in the calculus of the cluster (Aradau and Blanke 2018, p. 21, Fourcade and Johns 2020, p. 824). As Amoore (2021, p. 6) observes: ‘[t]o belong to a cluster is not about resemblance, [or] common characteristics’, but merely reflects ‘a spatialized proximity or distance’.
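
A small, hypothetical computation can illustrate what this ‘spatialized proximity or distance’ means in practice: two profiles may be grouped together because their feature vectors lie close to one another, even though they agree on no single attribute. The vectors below are synthetic and purely illustrative.

```python
# Illustrative sketch only: synthetic vectors in a hypothetical feature space.
import numpy as np

def euclidean(a, b):
    """Distance in the feature space; smaller means 'more alike' to the model."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

# Three profiles over four (standardised, hypothetical) features.
# a and b share no identical attribute value, yet are near neighbours;
# c differs sharply on most dimensions.
a = [0.9, -0.1, 1.2, 0.4]
b = [0.8,  0.1, 1.0, 0.5]
c = [-1.5, 2.0, -0.7, -1.1]

print("d(a, b) =", round(euclidean(a, b), 2))  # small: a and b cluster together
print("d(a, c) =", round(euclidean(a, c), 2))  # large: c falls elsewhere
```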

The cluster is an emergent, composite subject. In contrast to the rigid categories and classifications that are at the core of critiques of indicators (and more general concerns on biopolitics) (Rose 1998; Merry 2011; Hacking 2002), the ‘specific groups of travelers’ referred to above are composed inductively through inferred ‘characteristics’: the taxonomy of risk is itself enacted in the learning process. The subject emerging in this algorithmic calculus does not correspond to the stable statistical classifiers or fixed figures of alterity and enmity that have long been targets of critique, but is rendered visible through shifting correlations detected in data—pulsing patterns temporarily assembled in actionable form (Steyerl 2019). Emerging outside and against fixed (legal or political) categories and criteria, these composite clusters could be perceived precisely as an expression of the excess referred to by Mezzadra and Neilson: a surplus, or virtuality, captured and coded through new sensory technologies.Footnote 25 This disjunction has important consequences for the application of legal standards on equality and non-discrimination (Van Den Meerssche 2022). As Amoore (2021, p. 7) argues: the ‘logic of the cluster, thus, affords the state a means of pursuing racist borderwork while circumventing the social and juridical rights’ associated with ‘twentieth century transcripts of characteristics and categories’. The composition of the cluster not only entails new technologies of racialization (as we will return to below),Footnote 26 but also seems to evade and exceed structural categories on which legal frameworks are grafted.

The cluster is an inferential, correlated subject. The aim of the envisaged projects described above is not to construct causal knowledge about specific phenomena but to reveal and infer new patterns, probabilities and propensities. In this sense, the composition of the cluster is not a representation (of pre-existing entities) or evaluation (of past actions) but a speculation or projection (of potential future behaviors). The future, in other words, is seized in advance through the creation of an ‘augmented reality’—or what Rouvroy and Berns (2013, p. 182) describe as a simultaneous actualization of ‘a memory of the future’ and ‘a systematized serendipity’—an ‘actualization of the virtual’.Footnote 27 This entails a troubling temporal bind where the subject’s agency and actions are conditioned by algorithmic anticipation (a concern which figures at the heart of the liberal responses canvassed in the next section). At the same time, this also entails a reconfiguration of how state agencies are organized and public decisions are made, explained or reviewed—a reconfiguration that is often seen to erode possibilities of human judgment and the standards of accountability tied to those.Footnote 28 Yet, it is important to note that the changes we describe do not displace prior modes of governance (and the forms of subjectivation these imply). Just as tools of algorithmic inference tend to be nested into rule-based systems, correlational patterns are routinely discarded when they conflict with pre-existing norms, or segue into more formal causal codes. Logics and temporalities of rule co-exist and conflict. In light of the significant socio-legal challenges that the study of such composite systems poses, our aim here was modest: we sought only to signal, through the example of the (re)making of the subject at the ‘virtual border’, some salient changes in the modes of subject-making that are enacted algorithmically. We located these changes in the fluid and relational, emergent and composite, inferential and correlational nature of the cluster—a fragmented ‘subject’ made up of dispersed data points in a vector space, whose behaviors are anticipated and predicted. The ideal of an identifiable and individuated ‘human being’ pertaining to (equally identifiable and individuated) social groups or collectives, whose actions can be traced and assessed, is here displaced by an open-ended data-based risk profile—a speculative category to which beings are associated for reasons often unknown not only to them but also to the agents overseeing control of the ‘virtual border’. The emergence of algorithmic subjects, we argue in the next section, has led to a novel register of critique, driven by anxieties over the threat to the liberal subject.
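
Before turning to that critique, the inferential logic described above, in which a propensity is projected from correlations observed in past data rather than derived from any causal or evaluative account, can itself be sketched in a few lines. The synthetic data, the ‘refusal of entry’ label and the choice of a logistic regression are our own assumptions, offered only to show how anticipation displaces assessment; they do not describe any deployed system.

```python
# Illustrative sketch only: synthetic data and a hypothetical label
# ("refusal of entry was recorded"); no causal claim is made or needed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical (synthetic) feature vectors and observed outcomes.
X_past = np.array([[0, 1, 3], [1, 0, 0], [0, 2, 4], [1, 3, 0],
                   [0, 0, 5], [1, 2, 1]])
y_past = np.array([1, 0, 1, 0, 1, 0])  # 1 = refusal of entry was recorded

# The model learns whatever correlations separate the labels.
model = LogisticRegression().fit(X_past, y_past)

# A new applicant is scored before anything has happened:
# the output is a propensity, not an assessment of past conduct.
new_applicant = np.array([[0, 1, 4]])
print("predicted risk:", float(model.predict_proba(new_applicant)[0, 1]))
```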

3 Algorithmic subjects and liberal anxieties

The preceding section explored how practices of algorithmic governance both enact and foreclose particular modes of subject-making. There is nothing new, of course, in arguing that regimes of power produce the subjects on and through which they act.Footnote 29 Yet, we observed how the combined use of big data and AI entails a mode of subject-making that differs from prior forms of biopolitical or neoliberal governance. We traced these changes through a brief account of current border control strategies and technologies, which are oriented towards the composition of ‘clusters’ as objects of classification and intervention. Our analysis of the ‘cluster’ as a specific subject of contemporary ‘governance by data’ also considered how the conditions of possibility for legal protection, collective action or subversive forms of subjectivation implicit in prior regimes of rule now risk being eroded. Practices of mutual recognition—the social articulation of a collective ‘we’—from which political alliances and values emerge are displaced by speculative ‘clusters’ with which no subjective identification is possible. Prospects of commonality and publicness—as well as certain specific legal safeguards—increasingly appear as affordances of technologies and infrastructures that are being repurposed or replaced.Footnote 30 In this context, concerns are raised on how algorithmic governance threatens the treasured ‘rule of law’.

In this section, we briefly canvass three recurrent concerns and associated normative interventions: the observed assault on the autonomous human subject, the problem of publicness and inclusion, and the prevalence of algorithmic biases and associated inequalities. Combined, these different positions display a problematization of digital technologies from the perspective of particular liberal ideals of the subject and the public—ideals of human autonomy, inclusion, and equality. While we consider these perspectives as crucial in capturing what is at stake in the rise of ‘governance by data’, we also signal important limitations of this liberal lexicon, on which we expand in the final section.

In the encounter with practices of algorithmic governance, perhaps the primary concern of critical scholars is with the erosion of the autonomous, rational and self-reflexive (legal and political) subject, which risks being reconfigured and displaced by the disembodied proxy of the cluster. As we argued, indeed, the composition of clusters alters how individuals are conceived, recognized, and interpellated as subjects.Footnote 31 Rouvroy and Berns (2013, pp. 173–174) lament, in this light, that both governmental and corporate entities are not interested in the individuated person as such, but merely in assessing the traces of its mirrored ‘data double’. These phenomenologically void projections, they argue, are disconnected from the objects of their abstraction in troubling and debilitating ways. This mode of critique therefore seeks to safeguard the ‘person’ against the projections of the ‘profile’—to protect the agential space of the former from the incursions of the latter. Displacing the ‘person’ with the ‘profile’ cuts short processes of self-recognition and self-affirmation. Rouvroy contends: ‘being profiled in this or that way affects the opportunities available to us and the space of possibilities that defines us: not only what we have done or are doing, but what we could have done or could do in the future’ (Rouvroy et al. 2022, p. 130). This is seen to threaten the prototypical legal or political subject: ‘algorithmic governmentality deflects concerns or attention away from … previously privileged perspectives of causality and intentional agency or individual and collective “authority” (that is … the capability to “author” one’s actions, to have the “authority” to give account of one’s actions meanings)’ (Rouvroy 2012, p. 7).Footnote 32 For Rouvroy (2012, p. 12), ‘governance by data’ thereby ‘bypasses individual consciousness and rationality (not only because operations of data-mining are invisible, but also because its results are unintelligible for the instruments of modern rationality)’. This, she claims, undermines the self-reflexivity, critique and deliberation essential for a subject to politically identify as such: it erodes the ‘inactual, potential dimensions of human existence’ where ‘processes of subjectivation and individuation’ unfold (Rouvroy 2012, pp. 12–13). Or, framed most directly: ‘[t]here is no longer any subject’—in the recursive (re)composition of ‘infra-individual data into supra-individual profiles’, the ‘notion of subject is itself being completely eliminated … [Y]ou no longer ever appear’ (Rouvroy and Stiegler 2016, p. 12). We observe a similar line of argumentation with Hildebrandt (2019, p. 105), who diagnoses a ‘shrinkage of the inner self’ as a result of our ‘overdependence on computational decision-systems’. In the example of the ‘virtual border’, we noted how decisions are based upon the production of data-based risk ‘profiles’, which neither those being ‘profiled’ nor those processing risk indicators are capable of representing, contesting or reflecting upon. As such, individuals being profiled cannot narrate, rationalize or justify their own actions, since these actions are not their own (yet). Critical self-reflexivity—‘the capability to develop a mind of one’s own’ (Hildebrandt 2019, p. 106)—is perceived in these perspectives as intimately intertwined with the modern rule of law and the democratic liberal order.
In this light, the performance of pre-emptive processes of algorithmic governance ‘corrupts’ and ‘reduces’ our ‘autonomy’ and ‘human agency’, which ‘begins where we engage in critical reflection’ (Hildebrandt 2016a, b, c, p. 7; Hildebrandt 2020, p. 254). Hildebrandt (2019, p. 106), therefore, ‘warn[s] against the attempt to subvert our capability to reflect on how we navigate our world’, and calls upon lawyers to invent new ways to ‘accommodate human action’—to ‘safeguard the fundamental uncertainty and indeterminacy it assumes, and to protect the pinch of freedom and autonomy that defines us’ (Hildebrandt 2016c, p. 30). While the modern rule of law afforded such expressions of ‘free will and deliberation’, the challenge now, in tune with Arendt’s critique of behavioralist modes of governance, is ‘to protect the incomputable nature of the human self, its foundational indeterminacy and the natality it expresses’ (Hildebrandt 2016c, p. 29; Hildebrandt 2019, p. 105).

While these few paragraphs cannot do justice to the richness and complexity of the claims developed by Rouvroy and Hildebrandt,Footnote 33 we can recognize a shared mode of critique that aims to protect forms of human subjectivity and autonomy against the dissolution of the self, inflicted by nascent practices of algorithmic governance. While this mode of critique convincingly canvasses and counters many of the problems sketched in the previous section, we are wary of how it continues to figure around ideals of the human as a rational, free and sovereign subject—around notions of the autonomous and reflective ‘inner self’. In doing so, this mode of critique reaffirms political and ontological boundaries between the human and nonhuman, between mind and matter.Footnote 34 This, we argue, presents a number of risks and limitations. It is, first of all, crucial to account for how abstractions of autonomy are fraught with histories of assimilation and annihilation where boundaries of human subjectivity are drawn.Footnote 35 For Wynter (2003, pp. 260 and 264), in this sense, the emergence of the sovereign and autonomous human subject was possible ‘only on the basis of the dynamics of a colonizer/colonized relation that the West was to discursively constitute and empirically institutionalize’. Who is the ‘us’ which Hildebrandt sees as ‘defined’ by ‘freedom and autonomy’? How are such affordances of autonomy attributed? Which lineages and legacies do they import, and which forms of political action do they enable and sustain in the encounter with algorithmic governance? What becomes visible and possible—as we explore in the next section—if we desediment this rational sovereign subject as site of our normative aspirations?

Secondly, we question to what extent attempts to protect and restore the primacy of human agency—in the encounter with what Hildebrandt describes as ‘mindless’ ‘data-driven agency’—are materially tenable and normatively desirable.Footnote 36 Which political possibilities do we invite and foreclose by trying to reinstate what Rouvroy (2012, p. 7) cherishes as ‘privileged perspectives of causality and intentional agency’? Which registers of critique or contestation open up if, instead, we consider the human and nonhuman, mind and matter, to be inherently entangled, and agential forms not to pre-exist but only to result from such relational compositions (Barad 2007, p. 141)?Footnote 37 Perhaps this might allow us to move beyond sterile normative tropes—such as the ‘human in the loop’, which, as Amoore (2020, pp. 58–66)Footnote 38 notes, is an ‘impossible figure’—and allow us to craft more powerful and imaginative responses to the dissolution and displacement of the subject in practices of algorithmic clustering described above. While, as Hohmann (2021, pp. 595, 598) observes, many debates on ‘responsibility, accountability, and legality’ in the context of algorithmic governance ‘center back in on the human as a stable and accepted point of reference’, the subversive forms of subjectivation that we seek to foreground require us precisely to ‘unseat the sovereign, rational human being as the location of agency and subjectivity’.Footnote 39 This is not a renunciation of agential possibilities but a move beyond the rationalist, representationalist assumptions underlying invocations of autonomous human agency set out above—an opening to more-than-human and distributed forms of political agency (Petersmann 2021).

In addition to these concerns about human agency and autonomy, (international) legal scholars have also been preoccupied with problems posed by algorithmic governance for prospects of publicness and democratic inclusion. Kingsbury and Maisley (2021, pp. 354, 365), in this vein, convincingly point to the possibility of a disjunction between ‘infrastructural publics’ and ‘legal publics’—between those collectively affected and mediated by a particular material environment and those tied together as subjects under specific (and potentially overlapping) legal regimes.Footnote 40 Interestingly, they observe how infrastructures—such as the emerging ‘virtual borders’ briefly described above—produce heterogeneous, unstable and sometimes subliminal publics (Kingsbury and Maisley 2021, pp. 359–360)Footnote 41 which have varying degrees of affinity, cohesion and mobilization.Footnote 42 ‘Infrastructures’, they observe, may also ‘work to prevent the emergence of certain publics’, which they exemplify with the ‘politics of nonpublics’ enacted by the material making of South Africa under apartheid (Kingsbury and Maisley 2021, p. 361).Footnote 43 The analysis by Kingsbury and Maisley provides a powerful framework to problematize the composition of the cluster as a subject of algorithmic governance. We noted above how the digital infrastructures of bordering only produce temporary and fleeting bonds of association which preclude mutual recognition and collective action—in line with the ‘nonpublics’ that Kingsbury and Maisley point to and problematize.Footnote 44 ‘Nonpublics’ can meaningfully be defined (and distinguished from the forms of relationality that shape ‘publics’), in this light, as forms of association without the affordance for mutual recognition or collective action. The detachment of these ‘nonpublics’ from categories of legal protection, recognition, and representation displays the urgency and importance of understanding the frictions and disconnections between infrastructural and legal forms of sociality (Kingsbury and Maisley 2021, pp. 365–366).Footnote 45 In their attempt to realign infrastructural with legal ‘publics’ (under normative conditions of ‘publicness’), Kingsbury and Maisley (2021, p. 367) assert that the contextual, contingent bonds of belonging together infrastructurally have to be translated into legal inscriptions of subjectivity and collectivity. Law is a ‘vessel for normativity’ that leads to the ‘emergence of a new encompassing … public, a “public of publics”’—a mode of inclusion that gives a coherent constitutional form to scattered subjects only tied together by fluid, ephemeral and disempowering material conditions (Kingsbury and Maisley 2021; Kingsbury and Donaldson 2011). In reaction to the degradation of sociality that results from data-driven infrastructures of rule, this is a plea for a re-alignment and re-attachment of the fragmented subject into a ‘representative system’ that safeguards inclusion, voice and visibility (cf. Habermas 1998). The ‘phantom public’ that dwells within the algorithmic calculus has to be given legal form (Lippmann 1993, referred to by Kingsbury and Maisley).

In line with Kingsbury and Maisley, Bechmann (2019) (studying social media platforms) is also concerned with how the processing of ‘data as humans’ threatens ‘democratic values of representation (including participation), accountability, and equality’. As with Kingsbury and Maisley, the noted problem is one of ‘underrepresentation’—the absence of (individual and collective) recognition with(in) the data and the lack of democratic agency of those sensed and sorted algorithmically (Bechmann 2019, pp. 75, 87).Footnote 46 In this vein, Bechmann (2019, p. 75) demands that ‘all humans’ be ‘properly and equally represented’. The key concern is the inclusion in the ‘public’ sphere—the democratic domain—of those who are digitally divided or disregarded (Bechmann 2019, p. 78). In the light of these accounts, the cluster—this ephemeral bond of association without representational equivalent or pretence to publicness—poses a particular and urgent democratic problem.

While we consider these accounts to be promising and powerful problematizations of the algorithmic practices described above, we see value in the warning by Mezzadra and Neilson (2013, p. 274) against ideals that ‘search for legitimacy in the languages of inclusion and exclusion, the jargon of part and whole, or the horizon of a “pure politics” that plays itself out in the demos or the state’. The ‘becoming public’ that is called for by Kingsbury and Maisley—with reference to the liberal political theory of Arendt, Habermas and Fraser—also entails an enrollment in the systems of subjectivation that the language of inclusion both expresses and demands. Yet, as Tendayi Achiume (2022a) has powerfully argued, our critical focus should be on ‘marginalization upon inclusion’—on how inclusiveness can operate as yet another technology of domination. As elaborated in the next section, we are inspired here by Moten (Moten and Harney 2021) who observes that this ‘public sphere’—the domain where ‘putatively individual subjects act and speak in public in so-called collectivities or coalitions’—not only ‘exclude[s] [black folks] from modalities of citizenship, personhood, subjecthood’, but is ‘predicated on that exclusion, which is to say: predicated on the regulation and exclusion of that insurgency, which “blackness” instantiates’—an insurgency that ‘manifests itself as the refusal of the regulative force that has to be exerted in order for subjects to come into their own as subjects’.Footnote 47 In the following section, we therefore reflect on modes of sociality that work against this ‘regulative force’—a defiant fugitivity that operates against the (en)closure of inclusion and dwells within the indeterminacy of the ‘algorithmic aperture’.Footnote 48

Finally, a recurrent preoccupation among critical scholars concerns the biases that plague algorithmic governance. Conceptually close to aspirations for autonomy and inclusion, the focus here is on the values of equality. While gender, class, and multiple other biases are algorithmically activated and amplified, our brief analysis here focuses on racial biases as only one instantiation. Machine biases and systemic racism are deeply entangled. As Amaro (2022) argues, the ‘black technical object’—used to refer to the psychic fragmentation of non-White subjectivities who are the victims of intersectional digital biases—is ‘always-already pre-conditioned by an affective prelogic of race’ (Amaro 2020b, a, p. 304). This issue is often framed (and, we argue, reduced) by reference to the bias of training data or individual coders. To counter racial biases, computer scientists such as Buolamwini work to widen the scope of machine perception by gathering, including and coding more data of non-White individuals, for machines to learn to detect behavioral patterns beyond the ideal type of the liberal (White) human subject (Buolamwini 2020).Footnote 49 This attempt speaks directly to the erasure of certain subjects from the ‘public sphere’ hinted at above, and the resulting calls for a ‘greater’ inclusion and ‘better’ representation of racialized and marginalized groups. For Buolamwini, indeed, the issue is a representational one: ‘a lack of diversity in the [data] training set [that] leads to an inability to … characterize faces that do not fit the normal face derived from the training set’ (Buolamwini 2020). The situated gaze of the (potentially) biased data or coder needs to be traded for a ‘universal gaze’. This is a familiar concern for (international) legal scholars, who point to the ‘risk that biased, discriminatory, or otherwise unjustified outputs may result if data sets lack integrity, accuracy, and reliability’ (Endicott and Yeung 2021) and call for algorithms to be made ‘transparent’ and ‘inclusive’ (Benvenisti 2018). The problem of inequality, in this framing, is one of statistical error and polluted data—a deviation from the prevailing standards of equality and clean computational decision-making to be found and fixed. The normative objective of such legal interventions—equality through objective digital representation and neutral calculation—is intimately intertwined with liberal values of autonomy and inclusion.

There is no doubt that, faced with decision-making systems such as those used at the ‘virtual border’, these interventions are crucial in highlighting the ‘immediacy of racism and racialization’ (Amaro 2020b, a, p. 312). Yet, the ‘universal gaze’ proposed by Buolamwini (and implicit in the legal demands described above) can be seen as its own regime of perception and subjection. This is registered, in one sense, in the need for increased data extraction to ‘correct’ the skewed algorithm. Yet, as Amaro (2021, pp. 155–156) argues, what it ‘takes for granted is the simultaneous effect of making visible those whose aspirations are to remain unseen by power’. As we elaborate in the next section, those whose lives are threatened by inclusion within the system that violently excluded them in the first place do not necessarily demand to be recognized as equal ‘subjects’. By ‘reversing the coded gaze’—as Buolamwini does—and including ‘diverse’ faces into facial recognition training sets, this inclusivity seeks to make transparent an opacity that may serve as ‘the only container for safety’ for those populations who are now again rendered visible (in line with Glissant’s clamor for a right to opacity and Moten’s articulation of the ‘excess of living that itself becomes the production of life’) (Amaro 2021). The ‘universal computational gaze’, in this sense, limits the ‘self-determination of those that have little or no desire for inclusion in machine perception’ (Amaro 2020b, p. 307).

Such attempts to render visible or transparent other modes of being through a ‘universal gaze’ capture the ‘excess’ of Black life from within a confined computational milieu, which already ‘positioned the white object as the prototypical characteristic’ (Amaro 2020b, p. 307). ‘Whiteness’, Phan and Wark (2023) argue, ‘persists as a grounding norm’. The attempt to clean or de-bias this computational milieu—by safeguarding against the racialized development of risk profiles at the ‘virtual border’, for example—‘catalyzes disruption only on the level of superficiality’: ‘the white object remains whole, while the object of difference is seen as alienated, fragmented, and lacking in comparison’ (Amaro 2020b, p. 307). In this sense, Amaro (2021, p. 155) observed, calling out algorithmic biases is ‘already predicated on the reduction of chance and contingency: namely, to regress, to clean, to normalize the unexpected, and therefore normalize and exclude the opportunities that live outside of the lines of political engagement’. Fixing algorithmic ‘biases’, in other words, erases the anomalies that exceed ‘normal’ behaviors moulded against the backdrop of an idealized (White) human figure (Phan and Wark 2021). The insistence on de-biasing, then, might serve, as Amaro (2020b, p. 302) argues, to further ‘reduc[e] the operation of individuation, and primarily the differences amongst the living, to no more than an assemblage of contradictions that are negated and subsumed into a higher, more homogenous, unity of existence’. The classificatory operation of risk analysis at the ‘virtual border’, in this sense, enacts its racializing logic through the construction of a coherent, homogenous computational milieu where, as Amaro (2022, p. 56) notes, the ‘operation of individuation is … relegated to a series of representations among a falsely unified species’. The racializing workings of machine learning can thereby be seen as ‘the power that responds’, in Moten’s terms, to the ‘very emanation and the condition of possibility of becoming-common … of living in the world of fugitive and common things’ (Moten 2018, p. 24).Footnote 50 The normativity of Whiteness, in short, is reflected not in statistical bias or error but in the representational assertion of a ‘universal computational gaze’, which can be perceived as a ‘state of homogeneity’ that the calls for ‘accurate’ and ‘inclusive’ data sets precisely tend to reinforce (Amaro 2022, p. 57). Whiteness, in this vein, is not (only) an ontological, biological, or normative attribute but a process of homogenization. Calls from legal scholars and critical AI experts to render algorithms less biased, more inclusive and equal risk disavowing this important point by reinscribing, or recommitting to, a pre-existing substance of racial difference that underlies both the ‘overriding logic of correlation and hierarchy’ in machine learning processes and the attempts to retrace the inequalities enacted through algorithmic (risk) assessments to pre-existing racial categories (Amaro 2022, p. 61).Footnote 51 The problematic of bias, seen this way, is a trap: an erasure rather than an affirmation of difference, an enclosure rather than an opening.

Rather than folding ‘seamlessly into the desire for representation’ (Amaro 2020b, p. 304), it is perhaps on the terrain of error, dissonance, illegibility, and invisibility that new openings can be found: opportunities to ‘shift the pathological perspective from one of entropy and lack [or the loss of computational coherence as such] to a more affirmative process of psychic generation’ (Amaro 2020b, p. 311). In this sense, the misrecognition of machine perception might entail possibilities for Black life ‘outside of phenotypical calculation, prototypical correlation, and the generalization of category’ (Amaro 2020b, p. 307). What we need, Amaro and Khan (2020) note, is ‘a purposeful misrecognition of the dominant ontogenetic perspective of racial individuation’, to ‘bring forth new ideas of what it means to be Black in a world regulated by the substance of race’. This entails a ‘fundamental revaluation of the values that form individual and collective perception’, for otherwise to ‘make black technical objects [legible by and] compatible to computer vision algorithms’ risks the further reduction of the ‘lived potentiality of black individuals’ (Amaro 2020b, p. 304). How, then, can we contest algorithmic inequality without reifying the normalizing and reductive ideals of representational equivalence? Which spaces could be safeguarded for the ‘black technical object’ that ‘does not “want to be correct” or “corrected”’? (Amaro 2020b, p. 311). How can we interrupt or fracture the computational milieu rather than reaffirming presumptions of coherence and detectability?

Against this backdrop, and to make sense of modes of be(com)ing in algorithmic times, we draw on works from critical Black studies that engage with the possibility for social life beyond racial re-inscriptions that are enacted by and through the (return towards) representation of the subject and its collective. In contrast, we want to think of a mode of individual and collective be(com)ing beyond the liberal notion of human autonomy, beyond the logic of part and whole, beyond the procedural concern with bias. As Amaro and Khan claim, ‘[w]e must bring to light a notion of political subjectivity that does not organize at the threshold of existing perceptions of difference, but instead releases the energy from this interaction to form a potentially new individual and collective being’ (Amaro and Khan 2020). As one way of releasing this energy, we turn to notions of Black sociality.

4 Undoing the anti-blackness of be(com)ing in algorithmic times

In this section, we turn to works from critical Black studies to grapple with the questions set out above. We are aware that such works are not necessarily and definitely not primarily addressed to us, given our positionality and lived experience in this world.Footnote 52 There is a risk here of appropriating experiences of suffering that are not ours. It is important to stress, therefore, that we are neither speaking for or on behalf of ‘black folks’, nor trying to capitalize on or co-opt critiques from Black radical thinkers, which we see as also formulated against ‘us’.Footnote 53 Rather, we take on their call to undo the anti-Black premises of the world and its subjects of which we form part. It is, as such, the anti-Blackness of the modernist world in which we live that we are after, rather than the lived experience of Blackness that evades us. We see this as a particularly urgent project in light of how key tenets of the liberal lexicon—such as autonomy, inclusion and equality—are being recovered and reified in facing issues of algorithmic governance.

The reader might be questioning our turn to critical Black studies at this stage of the argument. In the preceding sections, we argued, first, that AI-based decisions—as exemplified through emerging digital infrastructures of the ‘virtual border’—are giving rise to new forms of subject-making through ‘clusters’ of data, which displace the liberal autonomous, rational and self-possessed individual subject and enact, instead, relational, emergent and inferential algorithmic ‘subjects’. Second, we argued that in response to the pressure put on the autonomous, rational and self-possessed individual subject, legal scholars are advocating for distinct ways of addressing these points of pressure. This is evidenced by calls for a (re)turn or (re)attachment to values of autonomy, inclusion and equality in relation to algorithmic infrastructures. Yet, we also argued that the liberal perspectives and anxieties in these interventions risk reinscribing the modernist world and its subject. It is in the expressed need for undoing, for refusing, for de-worlding the anti-Black foundations that underpin this world and its subject, that works from critical Black studies appear as uniquely insightful to us (despite us not being the primary audience of such works).

This is so for at least two reasons. First, critical Black studies have thematized a (Black) sociality that operates beyond or outside of the liberal category of the subject. This matters for our claim, since it is precisely the individual subject as such that ‘algorithmic governmentality’ is displacing with clusters of dispersed data points. Works from critical Black studies can, therefore, help us think possibilities of being and becoming-in-common in algorithmic times that counter the anti-Black foundations of liberal notions of inclusion and the public sphere, and refuse ideals of the sovereign subject. This opens onto potentialities of collective living—of emergent forms of sociality—that we see threatened by both the practices of algorithmic governmentality and the reification of the sovereign subject in current regulatory responses.Footnote 54 Second, we retrieve the power of refusal and resistance in Black sociality to counter and escape the ‘universal gaze’ of computational perception and subjectivation.Footnote 55

As alluded to in the previous section, critical AI scholars and activists working to counter the racial biases of algorithmic systems have called for more inclusive data to better reflect social diversity. What is at stake in those critiques are the biases of coders and computer scientists who, in attempting to ‘objectively’ program the ‘reality’ of the ‘world’, reproduce the sedimented racial, gender and class-based inequalities and discriminations that constitute it. The ‘world’, here, is composed around the ideal-type figure of the liberal human subject—as a self-possessed, self-reflexive and autonomous individual—that acts as the ‘norm’. A normative subjectivity, in other words, is reserved to and moulded by the lived experience of liberal White human subjects. Countering racial biases in algorithms would then expand the category of the liberal human subject by recognizing and including more diverse modes of being within it, thereby correcting this category by ‘cleaning’ it of its biases. But what if, instead of perfecting the category of the liberal human subject, we were to let go of it in order not to reproduce the violence it enacts by transcendentalizing the figure of the self-possessed, self-reflexive and autonomous subject?

Indeed, this figure of the subject originated by being reserved to White normativities, against nonhuman animals and chattel slaves that constituted objects of property and labor.Footnote 56 As such, the White subject cannot be disentangled from the Black object. The hegemonic figure of the White human being, in other words, overdetermines the sense of be(com)ing a subject. As Hartman (1997, p. 6) puts it, the experience of historical violence against (non)human objects is upheld each time the category of the liberal human subject is re-enacted, since this category was originally constituted by and through the exclusion of any-thing nonhuman, especially the enslaved as objectified and dispossessed property.Footnote 57 Against this backdrop, the ‘vision of equality forged in the law’—and more precisely in the White normativities forming the law’s subject—‘naturalized racial subordination while attempting to prevent discrimination based on race or former condition of servitude’ (Hartman 1997, p. 9). Calls to counter racial biases by expanding or excluding specific data inputs of machine learning therefore risk naturalizing the racial subordination that informs the category of the human as a White, or rather an anti-Black, subject. As Hartman notes, invocations of non-discrimination of subjects risk disavowing the ‘racial domination and liberal narratives of individuality [that are] utterly enmeshed in … emancipatory discourses of rights, liberty, and equality’ (Hartman 1997, p. 116)Footnote 58—discourses that are invoked against (racial) biases in AI today. Against this reproduction of violence, critical Black scholars attend to the generative desedimentation of the liberal human subject, instead of working at expanding or reworking it. Concretely, this implies suspending invocations of liberal values such as autonomy, freedom and equality to attend to and think creatively about what could happen if these values were refused and rejected.

This matters when considering data-based subject-formations. The critiques of governance by data traced in the previous section—on the observed assault on the autonomous human subject, the problem of publicness and inclusion, and the prevalence of algorithmic biases and associated inequalities—call for a re-inscription of modernist ideals of the liberal subject and its public. A return to the subject as such, however, perpetuates and prolongs the anti-Black foundations that underpin this category. The cluster—as a correlational, emerging, and composite mode of be(com)ing subject—attempts to govern in the present the excess, potentialities and speculative possibilities that are located in uncertain futures. By taking our cue from Du Bois—who at once produced and dissolved the notion of ‘Whiteness’ by way of an account of the position of ‘Blackness’ in America, and who insisted on an excessiveness of being that refuses to abide by simplistic logics and oppositional categories of racial divisions (Chandler 2013, p. 127)Footnote 59—we want to ask what forms of be(com)ing in algorithmic times could be envisaged if, instead of retrieving the liberal subject and its collective, we were to linger with the excess, the opaque and the de-individuation performed by algorithms.

First, we note how the excess of being that algorithmic logics attempt to tame and to govern—what happens in excess of, in difference to, and beyond the radar of AI-based analytics—seemingly echoes the lived experience of Black folks in anti-Black worlds. As Amaro (2019) argues, ‘black being as such actualizes as an experience that is lived from both within and in excess of artificial modes of perception and the fictive imaginary of race’. Blackness exceeds the possibility of being human qua (White) subjects—an impossibility also performed by way of data profiling and the displacement of the autonomous and self-possessed subject in favor of a cluster of data points. A political problematization of algorithmic governance through the notion of racial biases remains stuck with binary racial stereotypes—a fixed ontology of race—and thereby misdiagnoses the problem at hand, offering recognition and inclusion as a ‘solution’ instead of attending to, and leaving space for, the excess of possibilities of action that can materialize in improvisation. These possibilities evade the relational networks that are registered by algorithms to profile and model the behaviors they track.

Second, we note something emancipatory in this excess that evades algorithmic individuation. For Du Bois, the simultaneous production and dissolution of the subject ‘marked out the very space and possibility of desire and that which is yet to come’ (Chandler 2013, p. 120). The question of desire is important here in relation to algorithmic governmentality. As Rouvroy contends in this context, the automatic detection of intention taps into pre-conscious impulses and undermines the very possibility of desire as the capacity not to do everything one is capable of.Footnote 60 By foretelling desire, algorithms produce ‘acting outs’ that short-circuit one’s capacity to desire the not yet—or, with Du Bois, the ‘yet to come’. For Du Bois, the ‘double consciousness’ of African-Americans opened up possibilities that were not given in advance, letting transpire a sense of ‘self-as-becoming-other than the given’ (Chandler 2013, p. 153)—a given premised on pre-conceived categorical distinctions between (White) subjects and (Black) objects. Could algorithms be configured without enclosing forms of pre-figured subjectivation? Can modes of being and becoming in algorithmic times remain open to the not yet given? How could such modes of be(com)ing refuse and escape the pre-emptive logics of algorithmic anticipation? It is in this evasion, this escape, this fugitivity from the given that the insights of Black sociality appear key.

These interrogations find inspiration in Glissant’s (1997) articulation of a ‘right to opacity’—a right not to be rendered transparent, not to be measured against, and not to be related to, already given political and legal norms. A ‘right to opacity’ demands instead to remain open towards ‘what cannot be reduced, which is the most perennial guarantee of participation and confluence’ (Glissant 1997, pp. 189–190). In tune with Glissant’s Poetics of Relation, Mezzadra and Neilson (2013, pp. 274–275) perceive the political as a ‘social practice of translation [that] creates a collective subject that must continually keep open, open in translation, and reopen the processes of its own constitution’.Footnote 61 The opacity of open-ended potentialities collides with the total transparency and enclosure enacted by the algorithmic pre-emption that brings possible futures into the present. How, then, can one escape the logics of algorithms, which fix, foretell, and reduce the possibility of be(com)ing in algorithmic times?

As Harney and Moten also ask, could a sociality be envisaged, then, that enables a ‘fugitivity from the universal history that would fix the fate of the self’—a fate of the self fixed by algorithmic predictions?Footnote 62 In their latest book, All Incomplete, Harney and Moten (2021, p. 57) take specific issue with the rise of algorithms. The ‘logistics of algorithmic composition, and the rhythm of logical capitalism’ pulse into everything, becoming the beat that rhythms life everywhere, all the time. ‘Algo-rhythmic’ logistical capitalism operates through a totalizing movement—a movement of total access to, and putative transparency of, the subject (its past, present, and future actions).Footnote 63 As exemplified by the ‘virtual border’, the algorithmic assessment and projection of ‘risk’ aims at rendering future possibilities actionable in the present. With Harney and Moten, we then wonder: ‘[c]an we dodge and blur an algorithmic syntax that straightjackets and atomizes us into total access, so we can get back to rebuilding our atrophied habits of assembly’? (Harney and Moten 2021, p. 171). What habits of assembly, of collective thinking, of being and acting can exist without falling back on liberal invocations of the subject and its public?

What we learn from critical Black studies is that be(com)ing in algorithmic times without the subject bears the potential to live in the break of modernist, logistical-capitalist onto-epistemologies, where life is not reductively foretold in advance. Taking our cue from critical Black studies that problematize liberal reactions against racial biases and suggest distinct modes of be(com)ing, what transpires is the need to strive for a living that attends to the potentialities, excess and possibilities that a computational life rhythmed by the pulses of algorithms tries to control and govern—to dwell in the algorithmic aperture. In Amaro’s words: ‘[w]hat emerges is an optimistic view that actually says there is a term of resistance, there is a term of being that lives outside of governance’ (Hui and Amaro 2021, p. 60).

Why, then, would one call for the inclusion of Black beings into the category of the subject, from which they have always been excluded in the first place? As Harney and Moten (2013, p. 47) put it: ‘[W]e have to love our refusal of what has been refused (to us)’. This is a love for ‘subjectlessness’,Footnote 64 the purpose of which is not to correct the ‘biased’ constitution of the liberal subject but to dismantle the figure of the given subject as such. Against the logic of predictive analytics that anticipates and forecloses the be(com)ing of algorithmic subjects and their admission into existence, what if the ultimate aim, as Amaro (2020a) has argued, were ‘to shift the racialized individual away from the stereoscopic image of categorical difference and move towards a non-representational mode of self-determinism from within or outside of the institutional structure’? It is in this possible mode of self-determinism that we see hope for the refusal of the perennial repetition of (algorithmic) governmentality and its onto-epistemology of race. Perhaps be(com)ing in algorithmic times by refusing, resisting and overcoming the ‘dominant ontogenetic perspective of racial individuation’ (Amaro and Khan 2020) bears the potential to reckon with the suffering of ‘those whose presence never counted, whose absence was never accounted for, their existence unable to be recounted—that which does not even leave a trace’ (Culp 2021, p. 113).

Yet how, concretely, could the opacity of be(com)ing be practiced and kept open against algorithmic foreclosures? Our intention here is not to offer an answer to this question, but to introduce a distinct way of making sense of the displacement of the liberal subject by the cluster. While most of the critical scholarship that has taken issue with this matter has done so by recovering liberal values of autonomy, inclusion and equality, our objective has been to open up a different way of thinking the problem at stake—a different way of problematizing the anti-Blackness of be(com)ing subjects in algorithmic times.

5 Conclusion

The article has mapped out and explored how algorithmic technologies are reconfiguring modes of subjectivation today. Taking the example of the ‘virtual border’, we started—in the first part of the article—by demonstrating how the once stable categories of both the individual and the collective(s) to which it belongs are displaced by correlational and volatile forms of ‘subject’-making through the production of fluid, emergent, inferential clusters—pulsing patterns and propensities distilled from distributed data points. This algorithmic mode of subjectivation, we argued, inhibits forms of mutual recognition, or being-in-common, and thereby erodes the conditions of collective action. In response to this dissolution of the subject and the degradation of sociality it entails, as we analyzed in the second part of the article, legal scholars have challenged the threats posed by practices of ‘governance by data’ to liberal values of human autonomy, democratic inclusion and equality. While we see value in these approaches in countering the risk of an ‘algorithmic governmentality’ based on forms of projection and pre-emption, and in highlighting the ‘immediacy of racism and racialization’ in its operations (Amaro 2020a, p. 307), we pointed to some limits of this ‘legal imagination’ in counteracting specific forms of algorithmic violence, and argued that its projections of liberal subjectivity and inclusion are predicated on the erasure of difference and the disciplining of Black social life and its disruptive modes of being-in-common. In the third part of the article, we drew on contemporary strands of critical Black studies to speculate about what a ‘legal imagination’—in line with the theme of this special issue—could look like that seeks to undo, refuse, and de-world the anti-Blackness of the computational milieu and lingers with the possibility to ‘catalyze future affirmative iterations of the self’ in algorithmic times (Amaro 2022, p. 62).