1 Introduction

When we think of the term ‘violence,’ it typically conjures forms of incapacitation, or the deprivation of bodily, health and related capacities in ways intended by the perpetrator [1]. Observability, perceptibility and intentionality are elements that commonly feature in accounts of violence. ‘Slow violence’ is, however, an alternative framing, presenting violence as attritional in nature, namely as something that takes place gradually and slowly. The term ‘slow violence’ used in this paper draws from the work of Rob Nixon, who argued that environmental harm that disproportionately impacts the poor is a form of violence that:

(O)ccurs gradually and out of sight,… dispersed across time and space, an attritional violence that is typically not viewed as violence at all [2, p. 2].

Slow violence of this nature is the opposite of the spectacular imagery of environmental damage associated with oil spills, collapsing ice caps and forest fires that typically accompanies clarion calls to take climate change seriously. Applying the framing of slow violence, this article argues that the harms introduced by artificial intelligence (AI) to human rights, in addition to raising familiar discrete human rights issues such as the right to privacy, non-discrimination and freedom of expression, bring about deeper harms to the human rights framework. These are harms of ‘slow violence’ that can challenge both the foundational assumptions of specific rights and the normative justifications of human rights. To be sure, while the term violence itself is typically associated with somatic incapacitation, Galtung himself critiqued the narrowness of this definition and argued that violence should be understood as ‘the cause of the difference between the potential and the actual, between what could have been and what is’ [1, p. 168]. The article nonetheless argues that the slow violence framing is critical in understanding the unique challenge posed by AI to the human rights framework – namely its slow, gradual and grinding effects that quietly hollow the framework out from within. Rather than failing spectacularly to counter AI harms, the framework is at risk of going out with a whimper, not a bang.

To set the stage, the paper defines artificial intelligence in accordance with the definition offered by the OECD, namely as a ‘machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ [3]. Thus, while much ink has been spilt on attempting to tether artificial intelligence to human intelligence [4, 5] or to distill the ‘essential qualities’ of such systems [6], the OECD definition recognizes AI not as one technology but as a set of technologies that enable inferential outputs in ways that do not necessarily engage human oversight or control. The capacities enabled through AI systems thus depart from those of prior technologies, which operated under human control and were subject to human-defined parameters and intervention in varying degrees [7, see Annex B, 8].

Media and academic research have shown that AI systems can be biased or discriminatory, impacting the most vulnerable and marginalised groups in society [9,10,11]. AI has also been blamed for contributing to and accelerating disinformation and misinformation, wherein one is increasingly uncertain about the truth or falsity of content encountered online [12, 13]. Even much-hyped generative AI, of which OpenAI’s ChatGPT was said to be the fastest consumer application to reach 100 million users, is facing mounting claims of copyright infringement and of impacting the livelihoods of those in the creative industry [14]. Its promise is equally overshadowed by the potential of future advanced AI systems, known as artificial general intelligence (‘AGI’), to pose immense dangers to humanity, including potentially destroying humanity as we know it by outpacing human intelligence and wresting from humans the control they currently have over AI systems [15]. On the other hand, more prosaic but nonetheless serious challenges pertain to the mismatch between the promises and the harms brought about by the increasing deployment of ‘algorithmic decision making’ or decision support systems within public administration [16, 17], including within critical sectors such as healthcare, education and social welfare. These in turn not only impact access to socio-economic rights but also disproportionately affect the most marginalised and vulnerable communities [18].

This list of threats and challenges, comprising both current and potential harms, is akin in nature to the spectacular (environmental) harms mentioned above. These incidents and examples have been subjected to academic and industry research and policy debates on account of their palpable and detrimental impacts on human rights and freedoms, even as measures to address them remain contested, fragmented and, in many cases, sorely lacking.Footnote 1 What such accounts leave out, however, are the grinding, gradual and attritional harms that take their toll upon the human rights framework, damaging, amongst others, the normative justifications of human rights, including the foundational idea that underpins it, namely human dignity. This article aims to articulate and frame, through the lens of ‘slow violence’, the intuition gestured at by various scholars and policy makers as to why the AI harms discourse may go ‘beyond’ the human rights framework [19,20,21], reasoning that the conceptual misalignments are due to how AI unravels normative assumptions that the human rights framework has hitherto taken for granted.

The article proceeds as follows. Part 2 analyses the slow violence of AI on the human rights framework. It does so by first examining it at the level of the individual, followed by its impact on the normative justifications of discrete rights, focusing on the right to privacy, the freedom of expression and the freedom of thought. It then moves one level ‘upstream’ to examine the slow violence of AI on the normative justifications of the idea of human dignity that underpins the foundation of the human rights framework. Part 3 in turn examines existing proposals that can address this form of slow violence and sketches a renewed model of human rights accountability operable in the age of AI by highlighting three areas of focus. The article argues, firstly, that an agile approach to governance informed by a sociotechnical lens in analysing AI systems is key, as it can prevent the over- or under-regulation of discrete technologies based upon their purportedly immutable features or upon ideas driven by the inevitability of technological determinism. This closely ties into the question of what human rights are for. Secondly, a dynamic and iterative form of human rights impact assessment is necessary in order to detect, address and mitigate new forms of harms and vulnerabilities that arise from the deployment of AI systems. Third, a potential reversal of the burden of proof, alongside already called-for heightened obligations for AI developers and deployers, may be a necessary step to prevent the thwarting of accountability for AI harms.

2 Examining the slow violence of AI on the human rights framework

2.1 At the individual level

It is a truism to assert that the individual is the key subject within the human rights framework. Emerging from the ashes of the Second World War and the Holocaust, the human rights framework, starting with the Universal Declaration of Human Rights 1948, serves to address the ‘barbarous acts which have outraged the conscience of mankind’Footnote 2 by putting in place an international mechanism to guarantee certain minimum protections for the individual. Buchanan argues that the focus upon conferring rights on the individual was not coincidental, as events preceding the international formalisation of human rights showed blatant disregard for individual worth, tethering it instead to group-based (national, racial) affiliations [22]. The Universal Declaration of Human Rights thus oriented human rights protection around the individual, as ‘objects of international concern in their own right’ [22], upending the individual’s hitherto locus standi within international law as a mere ‘object’ of state-to-state diplomatic relations.Footnote 3 Beyond tracing the prominence of the individual to this relatively ‘short’ history of human rights, human rights philosophy also grounds the prominence of the individual within the framework. This line of scholarship argues that contemporary human rights merely reflect and codify pre-existing moral rights [on moral values and alignment for AI governance, see 23], rights which are possessed only by individuals [22, p. 245].

However, the promise and unbridled potential of empowering the individual through the human rights framework are increasingly being challenged by the design and deployment of AI systems. The rapid technological advancement of the 21st century, dubbed by the World Economic Forum the Fourth Industrial Revolution, ushered in an age where digital technologies, including big data and artificial intelligence, are increasingly deployed in our societies in order to optimise economic performance, improve public administration and bolster scientific progress, but also as a modern means of communication and way of life [24, see also 25, 26]. This development entails the datafication of phenomena in the physical and virtual worlds into computational forms. According to Mayer-Schoenberger and Cukier, the process of ‘datafication’ involves the transformation of phenomena into data, so that they can be quantified, tabulated and analysed [27]. Datafication of phenomena similarly involves the transformation of aspects of one’s actions, behaviours and personality into data [28]. Others argue that this extends to informatisation, wherein the body is turned into ‘anchors’ of datapoints through biometric technologies [29].

Datafication of human life is not in itself problematic. As improvements in medical research and, most recently, the COVID-19 pandemic have shown, there are palpable advantages in using data to understand and thereby manage threats to health, safety and security. Access to social and economic goods is also facilitated through datafication, bringing important advances to human well-being. Governments around the world are embracing artificial intelligence, designing national AI strategies to harness the potential of the technology. In turn, the datafication of sectors such as recruitment and education enables a more optimal use of resources, reduces bias and prejudice and enables the personalisation of services, all of which ostensibly benefit individual stakeholders [30].

At the same time, AI has been demonstrated to be biased, especially against minority and vulnerable populations. Scholarship has shown that AI facial recognition systems performed markedly worse on minority populations such as African Americans, and especially women [11], when compared to other demographic groups [31]. Blind trust in data-driven healthcare can also cement long-standing prejudices, for example the lack of attention towards pain experienced by women [32]. In turn, AI systems have also been used to surveil and monitor [33] and, in some instances, even to deny vulnerable populations their social and economic rights [17]. In other examples, AI systems within healthcare [34], education [35] and recruitment [36] have produced biased results when deployed upon the most marginalised segments of society.
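Claims of this kind typically rest on disaggregated evaluation, that is, comparing error rates across demographic groups rather than reporting a single aggregate accuracy figure. The following is a minimal sketch of such an evaluation; the records, group names and values are hypothetical illustrations for exposition only, not data from the studies cited above.

```python
# Minimal sketch of a disaggregated evaluation: group-level error analysis of
# the kind that studies of facial recognition bias rely on.
# All records and group names below are hypothetical placeholders.
from collections import defaultdict

# Each record: (demographic_group, true_label, predicted_label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

errors = defaultdict(lambda: {"wrong": 0, "total": 0})
for group, truth, prediction in records:
    errors[group]["total"] += 1
    if truth != prediction:
        errors[group]["wrong"] += 1

# A large gap between per-group error rates is what grounds a bias claim.
for group, counts in errors.items():
    rate = counts["wrong"] / counts["total"]
    print(f"{group}: error rate {rate:.0%} over {counts['total']} samples")
```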

Thus, the potential of AI is marred by what the AI ethics community terms ‘AI bias.’ However, while bias is an ongoing challenge, the measures taken to address it are gaining promising traction, both within industry [37,38,39] and policymaking [40]. While bias remains a persistent concern within the AI community, a tunnel vision that confines the harms of AI to individuals to one of bias alone hides the more pressing challenge AI poses to the individual as the subject of protection and empowerment within the human rights framework.

First of all, harms from AI systems, applied across changing sociotechnical environments, can appear less perceivable, less understandable and therefore less foreseeable for the individual [19, 41]. On the one hand, social media algorithms that nudge and filter content personalised and tailored to the individual might mean that one is increasingly unable to tell whether one is being manipulated [42] or to discern the commercially driven business model behind content curation [43] that increasingly enables the shaping of worldviews [44]. In turn, ‘black-box’ algorithms, machine learning systems using deep learning, deployed within consequential sectors such as insurance, banking, healthcare and social security, detrimentally impact citizens’ right of access to healthcare, social security and other safety nets without those citizens necessarily knowing why [45]. The opacity of the workings of the algorithmic model and the paucity of information that accompanies it impact the individual’s right to contest a decision or seek remedies. Much-hyped generative AI models such as DALL-E and ChatGPT have in turn systemically stereotyped certain groups, even where no individual per se is harmed [46, 47]. This relates to the fact that AI systems expand the taxonomy of harms beyond the types typically accommodated under human rights law, namely harms and violations that are exogenous, salient and reasonably foreseeable [41]. In turn, harms from AI systems can be representational, wherein they do not necessarily affect specific discrete individuals but pertain to how individuals are represented, read and perceived by AI systems [48]. Representational harms of this nature are not easily amenable to the accountability mechanisms offered by human rights law [49]. Thus, an AI system can perpetuate (biased) group-based representations, for example by ‘predicting’ potential suspects for fraud [50] or criminality [51, 52] and by persistently stereotyping certain groups in society [46]. Even as these systems gain widespread traction, the ability of individuals to address what are essentially representational harms is increasingly taken out of their hands. In this way, harms of representation are allowed to fester, even as they eventually facilitate further downstream harms to individuals. In addition to representational harms, the phenomena of deepfakes, synthetic voice and generated audio, visual and text content powered by generative AI can destabilise the information sphere to which we belong, where both notions of truth and falsity are up for grabs. Without even needing to induce false beliefs [53], the destabilisation of a shared information sphere can destabilise individual sense-making, posing potential adverse impacts on elections and the democratic order in general [54].

The multiplicity of ways in which an individual can be disadvantaged and negatively impacted challenges the straightforward form of individual empowerment encompassed within the human rights framework. The undoing of the individual’s capacity to articulate, prove and call harms to account is a form of ‘slow violence’ wrought by AI upon the human rights framework.

The difficulty of articulating and knowing the contours of harms caused by AI systems can in turn compound the difficulty of seeking accountability, thus detrimentally impacting the right to an effective remedy. For example, while much media coverage and policy attention has focused on biased and discriminatory AI systems, individuals themselves have ironically faced an uphill battle in attempting to prove or articulate these harms at the individual level. The lack of transparency, the complexity of AI systems, their relative autonomy and their dispersed nature can dilute causation to mere influence [55]. It can be challenging for individuals to uncover where in the AI system pipeline the harm originated or why it occurred. Pasquale famously criticised certain AI systems, especially those relying on machine learning techniques, as ‘black box’ systems wherein not only individuals but even the developers of those very systems find it difficult to explain their outputs and predictions [45]. Even where individuals do experience harms, the impacts may be too minor or come across as a mere inconvenience, thus discouraging them from the hassle of calling such harms to account [56].

For example, when it comes to addressing bias and discrimination by AI systems, it has been non-governmental organisations, human rights institutions and academic researchers who have managed to highlight problematic forms of bias that persist [9,10,11, 57]. In other words, the capacity to prove human rights harms is increasingly moving from individual hands to organisations that possess the means and resources to look into the big picture, by stepping back and demonstrating bias statistically through a comparison of datasets (whether real or projected). A counterargument is that reliance upon civil society and similar networks to assert human rights claims is not such an anomaly [58]. Non-governmental organisations have been at the forefront of the human rights movement from its very inception, at times even overshadowing the role of the state [59].

However, this argument is not meant to denote a dichotomous relationship between the individual and civil society. Instead, the disempowerment of the individual in addressing harms from AI pertains to her inability to understand her own condition within systems mediated by AI. Thus, while the pursuit of human rights claim-making has always benefited from the solidarity of NGOs, the claim I am making here is a qualitatively different one. Such forms of claim-making did not preclude the individual’s own knowledge and understanding of her condition of existence or her experience of harm. While the individual might have been powerless or financially hindered from pursuing human rights justice without the helping hand of NGOs, especially when faced with powerful corporations or authoritarian states, she understood full well the impact and detriment of alleged violations upon herself through her first-hand experience of those harms. The role of the NGO, in other words, was complementary: it built upon individual knowledge and pooled such knowledge for effective claim-making. Thus, while NGOs represented more power, the individual herself was not deprived of the power of understanding her environment, surroundings and conditions of harm.Footnote 4 In contrast, scholars have likened the AI-driven isolation through personalised content, the black-box nature of deep learning algorithms, the lack of transparency and the resulting accountability gaps that the digital condition introduces to a form of hermeneutical injustice. Originating from Fricker’s work on epistemic injustice [60], the term denotes a condition where an individual is increasingly ‘dispossessed of the interpretive tools, concepts and even words to make sense of the world and of one’s experiences’ [61].

In turn, while the outflow of epistemic resources from the individual is arguably addressed by relying upon institutions such as civil society groups, national human rights bodies or even academia, it can be an uphill task even for these relatively better resourced organisations to access the actual datasets or algorithmic models, due to protected trade secrets or a sheer lack of will to cooperate [62]. As the example of ProPublica, which examined the COMPAS recidivism algorithm used within the US justice system, shows, the many steps it took to audit the system – from having to ‘build a new dataset by merging various sources of information’ to getting ‘access to the predictions of the actual system’ [63, p. 72] and having to ‘define the labels the system is expected to infer and annotate the dataset to compare the system’s prediction with these labels’ [63, p. 72] – all involve highly technical endeavours beyond the ordinary capacities of many organisations, including those with specific mandates to uphold the protection of human rights, such as national human rights institutions.
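To give a sense of what such an audit involves in practice, the sketch below walks through the steps quoted above in schematic form: merging separate sources into a new dataset, defining the label the system is expected to infer, and comparing the system’s predictions against observed outcomes. It is not ProPublica’s actual methodology or code; the column names, threshold and data are hypothetical placeholders.

```python
# Schematic sketch of an external audit of a risk-scoring system.
# This is not ProPublica's code; all column names and values are hypothetical.
import pandas as pd

# Step 1: build a new dataset by merging separate sources of information.
screening = pd.DataFrame({"person_id": [1, 2, 3, 4],
                          "risk_score": [8, 3, 7, 2]})        # system output
court_records = pd.DataFrame({"person_id": [1, 2, 3, 4],
                              "reoffended_within_2y": [0, 0, 1, 0]})
merged = screening.merge(court_records, on="person_id")

# Step 2: define the label the system is expected to infer
# (here, illustratively, a "high risk" score of 5 or above) ...
merged["predicted_high_risk"] = merged["risk_score"] >= 5

# Step 3: ... and compare the system's predictions with observed outcomes.
false_positives = merged[(merged["predicted_high_risk"]) &
                         (merged["reoffended_within_2y"] == 0)]
print(f"{len(false_positives)} of {len(merged)} people were flagged high risk "
      "but did not reoffend within two years")
```

Even in this toy form, each step presupposes access to the system’s outputs, relevant record linkage and technical capacity, which is precisely the point made above about the resources such audits demand.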

This example unpacks slow violence exhibit one, namely the dispossession of the individual’s ability to call human rights harms to account, undermining one key premise (and the key subject of protection) of the human rights framework: the empowerment of the individual. While the human rights framework promised to empower the individual on account of her inherent dignity (see Sect. 2.3), the increasingly widespread use of AI systems can systematically undo this empowerment, as individuals are less able to understand the algorithmic mediations enabled through datafication and to call to account the conditions that afflict them. It is one thing for individuals to lack adequate understanding of their own situation; it is quite another when new forms of harm enabled through AI systems [64], including, as we have seen, harms that are representational in nature, are based not upon ill-will but upon system-based affordances. These in turn engage an interplay of causes (such as unrepresentative data, poor target variable specification, lack of diversity, implicit worldviews and existing prejudices), of which human rights can only partially address some and not others.

In other words, the individual lies increasingly at the periphery when the logic of datafication holds sway. In making this claim, however, I do not mean that individuals are entirely disempowered. It is not an all or nothing situation. Where harms are experienced at a personal level, for example where the individual is aware that an AI system was used in decision making and where the individual was affected in a consequential or detrimental manner, both data protection lawsFootnote 5 and the human rights accountability mechanism, including discrete rights to equality and non-discrimination, privacy and the right to an effective remedy, can be called upon as means of accountability. By the same token, however, the shrinking ability of the individual to call potential breaches of human rights to account is a gradual and attritional form of slow violence to the framework, as individual empowerment and accountability, arguably the very raison d’etre of individuals having rights, are slowly unraveling. Further, the use of AI systems increasingly permeates every corner of life, from entertainment to entitlements; thus not only can the workings of the system lack transparency, but the very fact of its use can itself be ‘transparent’ (read: invisible and unknown) to the user [66].

The following sections will move deeper into examining how AI undermines the foundational justifications of specific rights (e.g., the right to privacy, freedom of expression and the freedom of thought) and the normative foundation of the framework itself, namely human dignity.

2.2 Upon the normative justifications to discrete human rights

Much like how the individual as the subject of human rights protection and enjoyment is being disrupted by AI systems, these systems are also challenging the normative justifications behind certain key discrete rights. This section probes how the assumptions and implicit purposes behind discrete human rights are being unraveled by slow violence. It is beyond the scope of this article to examine all human rights; instead, three exemplars are examined – key rights impacted by AI systems, namely the right to privacy, the freedom of expression and the freedom of thought. In making this argument, I am not saying that the normative justifications behind these human rights are fixed, timeless and unchanging. Human rights have after all been described as ‘living instruments’,Footnote 6 hence the reasons for which we value human rights can change as society undergoes change, whether due to technological or political developments or social movements [67]. New rights may also emerge from these developments, the right not to be subjected to algorithmic decision making [68] and the right to be forgotten being key examples [69]. Instead, this section makes a more nuanced point. As the examples will demonstrate, even as the coverage of a right changes through time, the deployment of AI systems can disrupt the landscape of normative justifications offered thus far for those rights. This is a form of slow violence as it unravels the utility of the rights in question, leaving them brittle in light of the challenges posed by AI.

2.2.1 The right to privacy

Starting with the right to privacy, it is a truism to say that the right has been changed and shaped by technology. The right to privacy gained popularity in earnest as a concept as a consequence of the invention of photography and of print media such as newspapers, where privacy was popularised as the ‘right to be let alone’ [70]. These technological turning points led Brandeis to fear that ‘what is whispered in the closet shall be proclaimed from the house-tops’ [70, p. 195], hence heralding the normative justification for the right to privacy as respect for a personal space which no one should unjustifiably invade. This justification to a large extent still holds true today. In turn, the computer databases of the 1950s, which enabled data storage and searching, and the subsequent networking capacities and data-sharing redefined the right to include informational self-determination, namely the right to determine for oneself when, how and to what extent information about oneself is shared with others [71]. When profiling practices expanded due to the wide availability of data, the idea of privacy similarly expanded to resemble a form of public good. Going beyond the remit of privacy as a private good that can be traded off and dispensed with [72], Agre and Rotenberg argued that privacy should encompass the ‘freedom from unreasonable constraints on the construction of one’s identity’ [73, p. 6]. Here, ‘control over personal information is control over an aspect of the identity one projects in the world’ [73, p. 7]. In turn, the pervasive access to data across contexts, including through social media, led Nissenbaum to argue for privacy as a form of ‘contextual integrity’, wherein information deemed appropriate in one context should be restricted in others [74]. Thus, while information can and should flow in the digital age, the direction and degree to which this occurs should be based upon individuals’ normal expectations of contextual appropriateness.

In short, the pie of privacy grew with time alongside technological developments, with many such ideas now being key features of data protection law. For example, the European Union’s General Data Protection Regulation (‘GDPR’), regarded as the ‘gold standard’ of data protection law [75], incorporates data subject rights such as the right to information, erasure and rectification, and the right not to be subjected to automated decision-making.Footnote 7 In turn, those processing data have duties to comply with the principles of data protection, including lawfulness, transparency, purpose limitation, accuracy and accountability, amongst others.Footnote 8 It can therefore be argued that slow violence may not be occurring here, seeing that the right to privacy evolves and changes with time and technological developments.

However, with the advent of AI, even the putatively novel accommodations made by the concept of privacy through data protection are showing signs of aging. The focus on the permissible uses of data to account for data-related privacy harms cannot adequately address the modulatory affordances of online platforms that mediate the exercise of decision-making and autonomy [42], facilitated in large part through their ‘surveillance capitalism’ business model [43]. The harm of slow violence to the right to privacy is seen in how the previously assumed, albeit unstated, condition of unbridled freedom and space to construct one’s identity, change one’s mind and make decisions and choices is now increasingly modulated through the widespread use of digital technologies such as AI [76, 77]. Technological mediations driven by AI systems increasingly permeate different facets of life, from the professional to the personal to the political, resembling a form of infrastructural affordance of modern communication technology. Thus, social media platforms have been denoted ‘digital publics’ [78, 79], bringing to fruition, albeit digitally, the Habermasian model of the public sphere that enables citizens to debate and define the shape of the polity [80]. In turn, the images, text and audio used as benchmark datasets to train AI models define what counts as a good enough standard for ground truth and accuracy, resembling in essence an infrastructure of modern communication [47, 81].

Further, the expanded utility of new AI systems such as facial recognition technologies, including through remote biometric recognition, challenges not only the privacy rights of discrete individuals but also the foundations of the democratic order. The increasing deployment of such technologies has been argued to constitute a form of mass surveillance, with the potential to lead to chilling effects, such as adversely impacting the freedoms of expression and association and the exercise of other rights [82, 83]. Although it has been problematic to measure the harms of mass surveillance, as they can appear intangible and are tricky to express in terms of concrete measurable outcomes [82] or counterfactuals, it is precisely in the gradual normalisation [84] of such technologies that the normative premise of the right to privacy can be unraveled through ‘slow violence.’ In other words, the premise of the right to privacy, namely that we are masters in determining how and how much we present ourselves to the world around us, arguably no longer holds. With vast datasets and the dispersed use of AI in communications and even public life, the infrastructural affordances they entail mean that privacy is increasingly being hollowed out at its core. This occurs through the slow violence of displacing the assumption of unbridled freedom that shaped the contours of the right to privacy.

2.2.2 The right to freedom of expression

It is not only the normative premises of the right to privacy that are being unraveled. Lazar argues that the rights to freedom of expression and information are similarly being displaced by the widespread mediation of the content we encounter [85]. While freedom of expression rights, for example within online platforms, can be more straightforwardly challenged by the unjustified removal and censorship of content,Footnote 9 the larger underlying normative premise being challenged goes beyond the right to express oneself to the potential disruption of our shared communicative environments. Even with the putatively wide coverage of the freedom of expression within Article 19 of the International Covenant on Civil and Political Rights 1966, which guarantees the ‘freedom to seek, receive and impart information and ideas of all kinds,’Footnote 10 the harms to the foundational conditions for freedom of expression in the AI-mediated age can be harder to express. For example, do we have a right of reach, and how far can the advertising-driven business model be determinative of this putative right, beyond concerns of political expression? What limitations can be set in place, and should the current scenario, where it is businesses that determine the reach of expression, remain?

Indeed, while freedom of expression clearly prohibits the unjustified removal of speech, the corresponding problematique of the digital age is flipped on its head. Instead of the enabling normative premise of the freedom to express oneself and to gather information, the foundational premise of the digital age is that we live in an age of informational abundance. These engage two separate and contrasting normative premises – on the one hand, to encourage expression on the assumption that expressionFootnote 11 and the seeking of information are an unalloyed good, and on the other, to manage expression in an age of information abundance. The latter can entail curation or other forms of management of expression in order to make the very notion of expression meaningful for the individual. An individual drowning in informational abundance does not seem to serve the intent or purpose of the freedom of expression. Additionally, freedom of expression was premised upon, and valorises, the human aspect of expression. In an AI-mediated age where content can be mass generated at scale (with or without direct human involvement) and where the information sphere can be flooded with information of unknown or doubtful provenance, the contours between expression deserving of protection as a matter of human right and what falls outside this form of protection are unclear. On the one hand, anonymity and anonymous content have served the cause of human rights, including by protecting the identity of human rights defenders from possible repercussions. On the other hand, information of doubtful provenance, especially at scale, can undermine the foundational reasons we attach to the importance of freedom of expression – namely, respecting the autonomy of human beings as both listeners and speakers [88].

Lazar proposes that nothing less than a rethink of the purposes of human expression is required in the digitally mediated age. Where communicative environments are increasingly mediated by AI systems and controlled by a monopoly of private power, it is not only the freedom to express that can be imperiled but also one’s sense of autonomy and identity [85]. The slow violence that is unraveling the normative foundations of the freedom of expression and information is not lost on key policymakers, including at the level of global governance. The United Nations Global Principles for Information Integrity recommend five principles in order to uphold information integrity in the digital age. The five principles – on societal trust and resilience, promoting healthy (business) incentives, public empowerment, independent, pluralistic and free media, and transparency and research – all point towards enabling a healthier information environment. They focus on how this environment can be made more robust, precisely in order to enable the meaningful exercise of the freedom of expression and information.

Amongst others, the Principles rightfully identify that the specter of mis- and disinformation in digital spaces cannot be straightforwardly addressed using the language of rights alone but should instead engage a multipronged response. This can include investing in local media journalism, identifying information provenance and strengthening transparency. This is a tacit acknowledgement that the normative justifications of the freedom of expression and information may, at the very least, need a rethink in an age of informational abundance in which information is of doubtful provenance.

2.2.3 The right to freedom of thought

Similar concerns pertain to the freedom of thought.Footnote 12 Although less invoked than the freedom of conscience and religion, the freedom of thought has gained prominence in the new digital environments mediated by big data and AI [89]. The right has the normative goal of keeping the forum internum, the inner sanctum of our thoughts and our mind, free from interference and manipulation. It consists of three prongs, namely the right to keep thoughts private, the right not to be subjected to manipulation or interference in the exercise of the freedom and, finally, the right not to be punished for one’s thoughts [90]. Unlike many human rights, the freedoms of thought (and opinion) are absolute rights not subject to a balancing of interests [91]. The absolute nature of this right means that it is, in theory, a powerful means by which to address the adverse impacts that the use of artificial intelligence has on the freedom of thought. By the same token, the lack of invocation of this right reveals that while the theory is sharp, its operationalisation remains blunt.

Manipulation and interference can be considered the essential core of the right. Case law has shown that state-driven propaganda machinery that aims to brainwash populations, or re-education camps that actively aim to change undesired individual thoughts, straightforwardly amount to a breach of the absolute right.Footnote 13 However, unlike such exercises of state compulsion over the individual, AI-mediated digital environments do not seek to impose upon or manipulate individual freedom of thought in quite the same way. In fact, the behaviourally driven advertising business model of the majority of online platforms engages with individual thoughts, where these are expressed through likes, clicks and other metrics of engagement. Both parties, as it were, tango to the same music [92]. Despite this, Alegre critiqued the capacity of such platforms, armed with data both personal and non-personal, to target not only advertising but also content, thereby enabling the shaping not only of behaviours but also of thoughts [93]. While personalised content is not in itself problematic, the ‘surveillance capitalism’ business model that underpins it depends on continued engagement, including through serving divisive, sensational and polarising content granularly targeted to individuals at specific times and intervals [94, 95]. While social media platforms have been accused of fueling misinformation and disinformation that can potentially fracture the democratic order, no less important is the question of how and where we should draw the line between manipulative and permissible influence in highly mediated online environments, so as to sharpen and clarify the normative justifications for why we protect the freedom of thought [96]. The slow violence here, in other words, pertains to the assumption of unmediated environments for the freedom of thought. It was assumed that interferences with this absolute right could only ever be external, coming from an outside party (e.g. state brainwashing, ‘re-education’ camps) and directed at the rights holder. The highly mediated environments we find ourselves in today, including those mediated through the use of AI, mean that it is difficult to identify when a breach or violation of this particular right occurs.

Instead, what the slow violence framing reveals is that, in accounting for the mediated freedoms enabled by AI-driven technologies, the traditional human rights focus on the ‘freedom from’ interference should move towards an actively designed enablement of the conditions for freedom – a ‘freedom to.’ The expanded affordances of AI systems call for a renewed focus on thinking through the larger normative implications, and for private corporations and state actors to engage with or design technologies with a focus on preserving the ‘freedom to’ exercise autonomy, explore identities and think freely. To be sure, the positive obligations called for here are not novel to human rights. Both the framework and the case law clarifying human rights provisions implicitly and explicitly call for duty bearers, states but also businesses, to take on positive obligations to ensure the fulfillment and respect of their human rights obligations [97, on positive obligations generally, see 98]. The latter include due diligence obligations on businesses to assess, address and mitigate human rights harms that could occur and, where violations have taken place, to make remedies available to those affected [99]. However, despite the established practice of businesses carrying out due diligence, the design and deployment of AI systems introduces one novel aspect into the equation, namely the entanglement of values, biases and politics within the ‘product’ itself, the AI system [10, 11, 100, see generally 101]. Due diligence obligations should thus go beyond ensuring that human rights are not breached in business value chains to ensuring, in addition, that human rights are enabled and afforded through the values, business model and very design of those AI systems. This is to ensure that the normative ideals that the right to privacy, the freedom of expression and the freedom of thought aim to protect remain open and available, even as modern lives become increasingly intertwined with the technological [102].

2.3 Upon the normative justifications to human dignity

Exhibit number three on the ‘slow violence’ of AI moves us further still, this time to the normative foundation of the human rights framework, namely human dignity. To reiterate, the article examines the slow violence of AI towards the human rights framework – where the framework pertains to who the subject of human rights protection is (Sect. 2.1), what the rights are (Sect. 2.2) and why we have them. While the concept of human dignity has been argued to be unwieldy, it is at the same time a legal concept [103], a recognised rightFootnote 14 that has been clarified by case law [103, 104].Footnote 15 The Universal Declaration of Human Rights 1948 recognised the ‘inherent dignity and of the equal and inalienable rights of all members of the human family’Footnote 16 as the foundational starting point of the human rights framework. In turn, human rights instruments from various regions, from the EuropeanFootnote 17 to the AfricanFootnote 18 to the AmericasFootnote 19, acknowledge the importance of recognising the inherent dignity of the human person as part of the ‘foundation in which the superstructure of human rights is built’ [103, p. 3].

In turn, where this foundational concept of human dignity has been operationalised, through case law and treaty interpretation, the starting point has always been the (assumed) primacy of the human being and the inherent worth attached to it.Footnote 20 The concept has served a purposive role [104], for example, guiding the interpretation of case law to confirm that human beings must not be used as a ‘means to an end’;Footnote 21 for certain vulnerable classes of persons to be protected as a matter of human dignity;Footnote 22 and to flesh out the normative justifications of specific human rights such as the prohibition of torture, non-discrimination, privacy and even economic, social and cultural rights [105].

Exhibit number three on the slow violence of AI unpacks how even the putatively wide concept of human dignity, as the foundational underpinning of human rights law, is facing novel challenges posed by AI systems. While human dignity, as mentioned, has assumed the primacy of the human being by placing the human front and center, this is being challenged by a decentering of the human being [see also 106], facilitated by emerging technologies such as AI.

First, the increasing spread of datafication as a practice and ideology, encompassing data-driven decision making and support systems across sectors as diverse as social welfare, education and healthcare, reflects the thinking that insights revealed through data are a reliable and trustworthy indicator of social phenomena [107]. As we saw in Sect. 2.1, words, locations, interactions, interests and other social and natural phenomena can lend themselves to datafication [27]. This in turn becomes a new modality for making sense of the world around us, as the perceived objectivity of data is argued to reveal insights otherwise too cumbersome and slow for humans to unearth and process [108].

Data-driven decision making has also been argued to reduce the ‘noise’ introduced through inconsistent human decision making [30] and to result in decisions free from individual prejudice and bias. However, placing blind faith in datafication and its promise through data-driven AI systems is misplaced. Examples abound of how AI systems deployed in various settings have brought about detrimental, biased and discriminatory outcomes, as seen in the controversial childcare benefits and tax fraud scandal in the Netherlands [109], the A-levels examination fiasco in the UK [35] and the biased algorithmic healthcare determinations in the US [34]. Going beyond the minutiae of these individual examples, a more general point can be made. Datafication that decenters the human being and the (socially embedded) human experience is a recipe for disaster. In those examples, the technologically determinist stance of blind trust in the data-driven insights informing the predictions of AI systems saw those systems embed societal bias, exacerbate discriminatory outcomes and create systemic injustice.

While we have examined the concerns that the use of AI raises for the individual’s epistemic condition (see Sect. 2.1), the contextual separation enabled through the infrastructural embedding of AI, including through the use of benchmark datasets, also raises a human dignity concern. When examining the ImageNet dataset, one of the primary benchmark datasets for image classification, Balayn and Gürses argued that ‘labels such as “orphan” and “professor” have been used, but they do not objectively map to visual properties of someone or something’ [63, p. 84]. They further asked: ‘(i)s it reasonable to assume that someone’s job or orphan status can be inferred from a simple picture?’ [63, p. 84] Thus, the concern is not only that individuals are unable to easily challenge such classifications; it is rather that such ‘benchmark’ classifications, used as a standard for gauging the accuracy of image classification, do not map onto the subjective realities of those persons and freeze-frame individuals as putative algorithmic ‘types’, undermining the self-understanding and choice-making essential for self-governance, both of which are key elements of autonomy. It is a reduction of the individual, read through her component parts, in ways that necessarily challenge the normative underpinning of human rights, namely human dignity.

Second, the decentering of the human being is also taking place through the co-option of technologically driven visions for the improvement of humanity that place technological prowess center stage. The latest iteration of this technological promise is generative AI, popularised by the large-language model ChatGPT [110] and image generators such as DALL-E. The tunnel vision focused on technological prowess has seen the corporations involved sketch out a vision for humanity (while at the same time admitting the technology’s potential to wipe out humanity as we know it) [15] that lacks participatory equity. This is seen not only in the fact that technology companies have dominated the technological and governance space [111], but also in the lack of focus on the primacy of the human being implicit in the concept of human dignity. The latter is being undermined, for example, through how ‘users’ of such technologies are treated as ostensible test subjects of the technology [112]. It is users who have to self-assess the risks involved, for example in gauging the truth or falsity of content and the provenance of information, and in managing the risks associated with an informational environment increasingly populated by synthetic content. In turn, the potential risks of disinformation, manipulation and the general poisoning of trust in the reliability of generated content are placed on their shoulders [113,114,115].

Even as generative AI is touted as potentially ‘the greatest technology humanity has yet developed’, one that can ‘change society as we know it’ [116], the treatment of ‘humans-as-a-service’ in this way undermines the very basis of human dignity – one where human primacy and human interests take center stage. Large language models are but a springboard to the self-professed vision of creating artificial general intelligence, a term of art expressing the capacities of machines that can equal or exceed human intelligence. This in effect treats humanity as a testbed for a vision for humanity that it played no part in shaping and over which it has no say. This is a form of slow violence to the human rights framework. While the Kantian notion of not treating people as a ‘means to an end’ has been a central tenet in the interpretation of human dignity, its applications have tended to concern individuals or groups with a shared vulnerability.Footnote 23 Technical experimentation of this scale (and self-admitted risk) has not yet been analysed through the human dignity lens, even as the decentering of the interests of the many in this manner negates the primacy of the human being hitherto intrinsic to the very idea.

What these three examples of the slow violence of AI go to show is that despite the promise of the international human rights law framework in empowering and centering the interests of the individual against potential abuse, individuals are nonetheless being rendered more vulnerable through AI systems, in ways they cannot adequately understand, articulate or call to account. While this straightforwardly makes it onerous for individuals to assert their rights and call for their protection, Sects. 2.1 to 2.3 have demonstrated that the deeper reason the individual is unable to do so lies in the unraveling of the normative reasons that have underpinned the focus on the individual, the discrete rights themselves and the foundational idea of human dignity. This change is gradual, mostly passing undetected, and is grinding and attritional, undermining the relevance of the human rights framework in countering harms in the digital age and putting the framework at an inflection point as to its continued suitability and robustness. Section 3 offers a modest look at how this ‘slow violence’ towards the framework can be addressed through a reassessment and expansion of the human rights toolbox.

3 A new model of human rights accountability in the age of AI: three areas of focus

In translating the problematisation of the slow violence of AI into potential solutions, I will first briefly present some existing proposals that can minimise or counter the slow violence of AI towards the human rights framework. Malgieri and Pasquale argued that, in light of the potentially serious challenges AI poses to fundamental rights, no less than a presumption of illegality is necessary. This starting point calls for a licensure scheme to be put in place for high-risk systems and for the burden of proving compliance to be shifted at the outset to those developing the systems [117]. Elsewhere, scholars have re-examined the relevance of human rights [68, 118] in light of technological change. The author agrees with this general tenor. In light of the gradual, grinding and attritional challenges posed by AI towards the framework, nothing less than a clarification of the normative justifications of human rights is necessary. This can be done for specific discrete rights, such as the ones highlighted in Sect. 2.2, or for the human rights framework as a whole. The former can mean that certain discrete rights, such as the freedom of expression, and the underlying purposes of those rights, may need to be re-theorised in light of AI and digital technologies in general. The latter can include situating human rights within a larger multipronged policy response to the challenges posed by AI for society as a whole.

However, in order to concentrate on actionable policy responses, three key areas of focus for human rights accountability are sketched here. While accountability has been mentioned in different parts of this article, its essential idea revolves around the fact that human rights are not just ‘nice to haves’ but are premised upon empowerment and enforceability. The framework was put in place to prevent abuses arising from stark power differentials [119]. Wrongs and violations are subject to legal sanction and accountability, even if the implementation of human rights protections tends to fall short in reality.

The first area of focus pertains to the regulatory approach to governing fast-moving technologies. While law has attempted to catch up with fast-paced developments, regulatory approaches that ‘freeze-frame’ technologies at a particular stage of development can be both under- and over-inclusive, as they foreground the technology in question (and its current characteristics) to the exclusion of the sociotechnical harms that the technology engenders [64, 120]. These can impact upon the normative purposes of human rights. Instead, a sociotechnical lens that assesses how a technology is designed, whose values are embedded or promoted in it, and how it both reinforces power relations and exacerbates inequalities in society [101, 121] may be more fit for purpose. It does not view the technology or the technological tool in isolation. This is not in itself a new approach [122, 123]. A sociotechnical perspective centers not (just) the technology in question but also the societal effects it perpetuates. The latter impact human rights, not only in terms of discrete and individual human rights per se, but are also intimately linked to the question of what human rights are for [85, 124, 125].

For example, the increasing use of AI-driven biometric and emotion recognition technologies within border and migration management has drawn criticism over the possible biases of these systems, especially towards racial and ethnic minorities and certain nationalities. Thus, the well-trodden path of the biases of AI systems is also reflected in the deployment of such systems, and research abounds, including within this sector-specific area, on how to debias such algorithms [126]. However, the technical means of addressing algorithmic bias in contested sites of border and migration control belie a larger sociotechnical concern [127]. Fundamentally, Gürses argues, these systems ‘remain harmful regarding a very different type of issue as they are discriminatory by nature and in practice’ [63, p. 64]. As the logic of securitisation, accelerated post 9/11 and by the European ‘migration crisis’, permeates the sites of border control and asylum, taking measures to tweak and debias the algorithm used in AI biometric systems in fact reinforces power over the most vulnerable. It shifts the focus from the structural assemblage of discrimination enabled by these technologies to a merely technological focus on debiasing one particular system. In turn, the lack of scientific validity regarding the objective nature of emotions [128] and the ostensible capacity of such systems to discern ‘biomarkers of deceit’ [129] introduce sociotechnical harms, such as the disempowerment of individuals and collectives, in ways that cannot be adequately addressed by correcting any specific AI system. A sociotechnical perspective brings forth the notion that addressing the technical element is only one component of assessing the appropriateness of technological systems, and one that is, on its own, woefully inadequate.

One way to accommodate the sociotechnical perspective in the governance of AI technologies is through the adoption of agile approaches to governance. An agile approach has been called for, amongst others, by the UN Secretary-General’s Envoy on Technology’s Global Digital Compact.Footnote 24 An agile approach to the governance of AI also entails widening the governance toolbox, as reflected in instruments such as codes of conduct [see for example 130] and controlled environments for testing such as regulatory sandboxes [131].Footnote 25 Such an approach is also key as emerging and evolving vulnerabilities facilitated through the sociotechnical use of technologies such as AI cannot be adequately framed, categorised and ‘frozen’ in advance. The Council of Europe’s multilateral Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law provides an example of a relatively agile approach, as it adopts a more flexible risk assessment approach to the design and deployment of AI in comparison to the EU’s approach of fixed categories of risk (to fundamental rights).Footnote 26 In addition, regulation through laws and binding treaties can be complemented by technical standard specifications where appropriate and by a consideration of ethical aspects, such as how the idea of justice or fairness should be understood and designed for in specific AI systems or, more broadly, how human flourishing should be understood and pursued in the age of AI.

However, adopting a sociotechnical perspective for governance purposes is not enough. To this end, the practice of deploying human rights impact assessments (‘HRIAs’) to address potential impacts on human rights can similarly be applied to AI systems. However, unlike HRIAs that are typically deployed at the initial stages to assess potential human rights impacts before a (business) activity or investment is undertaken, a ‘dynamic’ and iterative form of HRIA is required to address the potential impacts of AI on human rights, as these resemble less the observable and measurable forms of harm than evolving and increasing vulnerabilities to human rights harms arising through sociotechnical deployment and use [132, 133]. For example, classifications and predictions of AI systems that unfairly impact certain groups can sidestep the existing forms of protection afforded under the law. New vulnerable groups might emerge from algorithmic classification which do not fall within the confines of protected groups under non-discrimination law, as algorithmic classifications are not informed by legal boundaries [134]. A dynamic and iterative HRIA can assist in detecting, addressing and potentially mitigating new harms that emerge only upon the interaction of data and inferences at deployment [133]. In turn, while human rights experts have been at the forefront of pushing the human rights protection agenda to novel frontiers, HRIAs for AI systems should engage a multistakeholder approach going beyond the usual suspects [135]. Depending on how and where a system is to be deployed, technical AI experts, ethicists, affected stakeholders, vulnerable populations and human rights experts should all have a seat at the table. The inclusion of diverse stakeholders and voices is essential, alongside the necessity of asking question zero: ‘do we need to deploy this AI system (for this task/field) in the first place’? This entails engaging with an even wider question, asking not which human rights are impacted by the AI system deployed but how technologies can and should play a role in our lives [136].Footnote 27 While society can be ‘kept in the loop’ in this manner straightforwardly through legislative representation and the legislative outcomes that follow, debates and discourses should extend beyond the realm of legalism into engaging with the ‘sociotechnological imaginaries’ [122] of human rights. In essence, this means defining a human rights vision that does not trail behind and play ‘whack a mole’ whenever violations occur, but one that ensures that technological visions cohere with affording and protecting the conditions wherein human rights can take hold and flourish.

The third focus area pertains to potentially reversing the burden of proof: from placing the burden on individuals to prove human rights harms, to placing positive obligations upon corporations and others designing and deploying AI systems to take steps to account for human rights from the start. Where such measures fail to be taken, a presumption of responsibility would, in certain circumstances, apply. Such an approach is not per se novel. The EU’s proposed AI Liability Directive partly alleviates the burden of proof by adopting a ‘presumption of causality’, noting precisely that it is increasingly onerous for individuals, owing to the lack of transparency, complexity and autonomy of AI systems, to demonstrate harm or call such systems to account.Footnote 28 In turn, the unlikelihood of fully eliminating unequal outcomes and bias from AI systems [137, 138] means that an accountability gap would arise should designers and deployers of AI systems not be held responsible [139]. Placing appropriate obligations upon the actors with the power to address potential harms is one key measure to ensure that human rights are preserved and protected from the outset.

The key initial steps here are meant to sketch the contours of how the human rights protection toolbox should evolve and are by no means the last word on the matter. More research is needed on the intersection between human rights law and the sociotechnical perspective in informing the regulatory landscape, on the benefits and drawbacks of an agile (and sociotechnically informed) governance approach, and on how human rights impact assessments can account for dynamic harms and new vulnerabilities that emerge from deployment. The article provides one means of addressing the slow violence challenges raised and does not purport to exhaust the solution space. Importantly, the measures proposed here do not negate the need to take a critical look at human rights, including through a re-theorisation of the rights themselves and their underlying aims.

4 Conclusion

This paper has shown that, in addition to impacts on specific human rights, the harm done by AI systems to the human rights framework is one of ‘slow violence’, unraveling the normative justifications we attach to the focus on individual empowerment, to the discrete rights themselves and to the foundational concept of human dignity. Such harms can be unseen and gradual, and are difficult to account for within the existing toolbox of the individualist human rights framework. The slow violence of AI harms also undermines the normative justifications of the right to privacy, the freedom of expression and the freedom of thought, amongst the examples examined in this paper, as well as human dignity, in ways that make individuals and societies more vulnerable, as it leaves them without adequate means to articulate, understand or prove these harms. To this end, the paper proposed three key areas of focus requiring positive measures, namely: the adoption of a sociotechnical and non-static lens in assessing technology; the use of human rights impact assessments to capture evolving vulnerabilities and to engage purposively in affording human rights through AI; and, finally, a potential reversal of the burden of proof and heightened accountability measures, including through placing positive obligations on the deployers of such systems. The paper aims to start a conversation on how AI impacts human rights beyond the familiar frames of discrete individual rights and thereby to open up room to engage with human rights imaginaries, including through an expansion of the human rights vernacular and toolbox.