1 Introduction

As we know well by now, big data, artificial intelligence (AI), and other forms of automation not only bring new opportunities and potential risks to their users but also change the way people live together. This is the case across the many domains of application, be it in the use of robots in care or manufacturing processes, algorithmic management in the workplace, or new forms of predictive and diagnostic analytics with regard to health and disease (Björnsson et al., 2019, Braun, 2021).

In recent years, one idea has come to the foreground as a way to address the question of the goal of new technologies – especially artificial intelligence – as early as possible in the design and development process: AI should be human-centred (European Commission, 2018, European Commission, 2020, European Commission, 2021a, European Commission, 2021b, High-Level Expert Group on Artificial Intelligence, 2019). However, Bannon (2011) and other scholars have pointed out that the term human-centred is often used in a generic way to encompass a range of distinct research themes without any commitment to a philosophy or overarching conceptual framework. This criticism does not mean that the concept itself cannot make an important contribution to ethical debates on dealing with artificial intelligence. Nonetheless, though many current policy papers use it, the notion remains rather vague and conceptually underdetermined (Kaluarachchi et al., 2021, Göttgens and Oertelt-Prigione, 2021).

At the same time, the core demand for a way to understand if and how a technology serves the well-being of people is widely recognised as socially and politically salient (Coeckelbergh, 2022). This is also one of the central points of current debates as to whether and, if so, how critical approaches should be incorporated more strongly into current debates on the ethics and governance of new technologies (Delanty and Harris, 2021, Waelen, 2022, Braun et al., 2023). In essence, critical approaches claim that every debate about the ethical and social aspects of new technologies is also a political debate about which voices should be heard, which bodies should be made visible or invisible, and which claims for recognition should be taken into account. We take up aspects of this debate by using these questions stemming from critical approaches as a way of responding to some of the pitfalls of the current debate around human-centred AI.

In this article, we use French philosopher Jean-François Lyotard’s theory of the “inhuman” as a heuristic guide to analyse what a more critical human-centred perspective might add to current discourses on governing technology.

After a brief first sketch of the current and historical context of human-centredness, we change focus (Part 1). Instead of attempting to determine what the essential elements and/or conditions of human-centredness are, we focus on determinations of the inhuman. We do this through an interpretation of Lyotard’s concept, dividing the inhuman into three aspects: performativity, vulnerability, and novelty. We suggest that reading and interpreting the inhuman in this way can help us better understand what we should be attentive to when discussing human-centredness and what is often ignored in discussions of the concept (Part 2). Based on these analyses, we will then ask what we can draw from Lyotard’s reflections for the conceptual determination of human-centredness. In a third and final step, we link this debate back to the initial question of critical theory by developing a first sketch of what can be gained from these considerations for the positive determination of human-centredness. We argue that the analysis of the inhuman helps us to take seriously the multifaceted nature of human vulnerability in relation to technology and AI in particular, which supports the development of a richer sense of human-centred AI (HCAI) (Parts 3 and 4).

2 Putting Human-centredness in Context

The notion of human-centredness can be critically placed in a broader context of reflections on human-technology relations. Understanding this longer trajectory allows us to situate the development of this discourse in relation to Lyotard’s own analysis, which follows in the next section. Significantly, it gives an insight into which aspects of human life are foregrounded in the notion of human-centredness in human-technology and, more specifically, human-AI relations (Hille et al., 2023, Salloch and Eriksen, 2024).

Our contention is that the longer history of human-centredness in technology assessment has occurred largely against a background of economic reason; and this continues to be the case. In this frame, human-centredness entails increasing the human capacity at the individual level to function within the economic rationality of a historically contingent socio-technical system; this remains the case in the current discourse surrounding HCAI. We argue that at present, this framing is particularly apparent in European Union policy, which places strong emphasis on HCAI while conceptualising it primarily in terms of economic rationality.

The broad question of whether specific technological advances have benefited or harmed the populations implicated, i.e. whether they have been sufficiently human-centred, is an old one, and no particular technology has escaped scrutiny. This holds even for the most human of technologies: writing. The French anthropologist Claude Lévi-Strauss noted in his seminal study Tristes tropiques that writing “vastly increased man’s ability to preserve knowledge. It can be thought of as an artificial memory, the development of which ought to lead to a clearer awareness of the past, and hence to a greater ability to organize both the present and the future”, while at the same time pointing to “the typical pattern of development observed from Egypt to China, at the time when [it] first emerged: it seems to have favoured the exploitation of human beings rather than their enlightenment. […] My hypothesis, if correct, would oblige us to recognize the fact that the primary function of written communication is to facilitate slavery.” (Lévi-Strauss, 1992) Likewise, recent debate (Scott, 2018, Graeber, 2021) has focused on the somewhat misguided question of whether the invention of agricultural technology can be understood to have improved the human condition or rather, in the manner that Lévi-Strauss discusses writing, facilitated slavery.

If this question about the “human-centredness” of cereal crops or any other particular technology is misguided, it is because even if cereal crop technologies effectively facilitated practices that we would now consider forms of slavery, they did not do so on their own but within socio-technical systems of artefacts, practices, environmental conditions, and the many elective affinities (to use Weber’s poignant term) between them (Scott, 2018). A similar understanding of the importance of socio-technical systems (encompassing social, technical, political, economic, and environmental dimensions) in the normative analysis of AI and data-driven technologies is prevalent in contemporary debates (Crawford, 2021, Yu et al., 2023). From this perspective, all technologies can be critically analysed from a more holistic perspective in terms of their relation to human emancipation or its inverse. The salient question then becomes how emancipation is understood within the analysis and how the idea of human-centredness is linked to different spheres of social life.

The centrality of the economic sphere and the human capability to operate within this sphere is particularly present in the understanding of human-centredness that pervades the current discourse around AI. Moreover, even the fundamental rights approach that characterises the soft side of European AI regulation can be interpreted as primarily grounded in capabilities to participate in economic activity. The hard, legislative side, as seen in the “AI Act” and other legislation, takes a risk-based approach relying largely on industry self-assessment, which is more directly connected to economic rationality (Nature, 2024). This interpretation is reflected in international policy from, for example, the UN Office of the High Commissioner for Human Rights (UN Office of the High Commissioner for Human Rights, 2020) or the EU’s European Economic and Social Committee (European Economic and Social Committee, 2022). Likewise, interpretations of human-centred AI within data science centre on how AI can augment human capabilities. But in this picture, humans are understood primarily as workers and AI users who should retain sufficient levels of control over the AI systems that are augmenting their capacities (Kogan et al., 2020).

We can elaborate this point by noting how the current discussion about human-centredness maps onto the distinction between labour replacing and labour enabling technologies that is prevalent in the discussion of labour automation, one of the central social policy concerns relating to AI. As Frey (2019) has extensively discussed, labour replacing technologies replace human activities with machine activities without opening further avenues that improve the wellbeing of the people affected, whether economically or in another way, e.g. through improved safety, life satisfaction, or the opportunity for capacity building. By contrast, labour enabling technologies, while also taking over or complementing the activities previously carried out by humans, open up possibilities for capacity building or expression in ways that are valued by the persons affected (or, more generally, society), without diminishing the quality of life or social standing of those persons. Increased wealth (and its distribution) would be one, historically prominent, measure of this, but it would certainly not be the only possibility. Labour enabling technologies facilitate the development of human capabilities and values, inside or outside the formal sphere of work and employment, but in a socio-technical context that is largely structured by it. The process of enabling can go beyond simply replacing one activity and facilitating another. It could also entail a new form of machine activity that establishes new possibilities for capability building. Likewise, the replacement of one form of labour or activity might open up time for others, and in this way, the replacement itself is enabling even if the activity is entirely taken over by machinic labour; an argument made, for example, by André Gorz (Gorz, 2011), as well as more recently in techno-utopian literature (Bastani and Waldenström, 2020). Nonetheless, the economic sphere remains at the centre of the analysis. It is the human as an economic actor that is affected directly by automation, and this reverberates into other dimensions of life. The current discourse of HCAI maps this long-standing discourse about labour enabling vs. labour replacing onto current developments in AI and data-driven digital technologies.

The labour replacing vs. enabling distinction is most often discussed in relation to the labour market. But as a general heuristic for human-centred technology (or AI specifically), it pushes the scope of the concept beyond the strict confines of labour market dynamics, as it is highly debatable whether growth facilitation within the labour market should really be the sole or primary criterion for considering technology to be labour enabling. Yet while we might shift from understanding human-centredness in terms of labour replacing or enabling to a broader and more qualitative concept of capacity replacing or enhancing, the basis in economic and labour thinking nonetheless remains predominant.

The capacity replacing or enhancing analysis of technology also cannot take place in isolation from the broader shifts in socio-economic and value systems, which are themselves co-created with the introduction of various technologies. One of the best-known examples in the literature is the introduction and widespread availability within technologically developed and affluent societies of household appliances for the mechanisation of certain aspects of domestic work (washing machines, dishwashers, vacuum cleaners, etc.). Rather than creating more free time for paid and unpaid domestic workers, the introduction of these machines in American households increased the amount of time spent on domestic work (Cowan et al., 1985). A plausible explanation for this counter-intuitive dynamic is that the mechanisation of these forms of labour led to a value shift which created an affinity for more domestic labour, i.e. the mechanisation created the socio-economic conditions for a greater social demand vis-à-vis levels of, for example, domestic hygiene. In this case, potential emancipation from labour, a classical human aim, led to the creation of the socio-technical conditions for more efficient labour, which appears to be a modern human aim. The technological shift facilitated the creation of greater demands for efficiency and higher standards of hygiene. This can only be viewed as human capacity building in a rather narrow sense.

This points to the question of cui bono, i.e. who benefits from technological change. As Lévi-Strauss’s comment above makes clear, a technologically facilitated increase in human capacities, now perceived at the historical and aggregate level, always has winners and losers, to put it coarsely. Lévi-Strauss’s point is that writing, this most human of technologies, was initially implemented and likely designed to facilitate the most inhuman of practices: slavery. Frey’s analyses of the history of industrialisation likewise clearly illustrate that processes of mechanisation during the industrial revolution had clear losers (e.g. skilled artisanal and child labourers) and winners (capitalist industrialists). This changed or balanced out somewhat over time, as productivity growth and increased socio-technical capacity (e.g. in public health) gradually translated into higher living standards for broader swathes of the population, though not without political struggle.

In brief, our contention is that the discussion of “human-centredness” in ethical and political technology assessment has occurred within the frame of economic rationality. The discussion of HCAI continues this trend. In this context, economic rationality refers to a comprehension of human activity, work and time that has increased productivity, economic growth, or profit as its aim (Gorz, 2011, 109–110). Human-centred technology within the frame of economic rationality is largely understood in terms of the optimisation of human economic performance – labour or capacity enhancing – within socio-technical systems that are ordered according to this mode of rationality. Or, it is understood as protecting the vulnerable human, through systems of fundamental rights or rights-based rules and legislation, from exclusion within the socio-economic system that would diminish the possibilities for labouring or capacity development. In other words, it seeks to impose some limits on technological development to ensure against excessive de-humanisation in processes of economic rationalisation.

Our argument in the sections that follow is that Lyotard’s analysis of the “inhuman” as opposed to the human can help us to make some conceptual and practical sense of “human-centred AI” or, even more broadly, human-centred technology. These discourses arguably fall into the same conundrum as the humanisms Lyotard rebukes in his analysis: they assume the human as a stable and pre-given form to be protected without explaining what it is. Thus, an analysis of them through the lens of the inhuman will allow us to distil a critique of these approaches while remaining sensitive to the requirements of socio-technical systems thinking. This should open different pathways to thinking about the sense of human-centredness in current debates surrounding AI (Coeckelbergh, 2022). This first requires a deeper analysis of the idea of the inhuman that Lyotard has developed.

3 Lyotard and the Inhuman

In the 1970s and 80s, the French philosopher Jean-François Lyotard raised the question of what it means to be human in the face of the transformations brought about by the ICT revolution (Lyotard, 1984, 1991; Simons, 2022). However, rather than beginning with a plea or plan to preserve the human (or do away with it), in his introduction to The Inhuman: Reflections on Time (Lyotard, 1991) – the selection of essays where this topic is broached most directly – he begins by gently chiding his fellow philosophers for beginning their own defences of the human precisely by taking the question of what it is to be human for granted. Humanism continues to “administer lessons” about the value of humans without ever addressing the need to interrogate the subject of these value claims (Lyotard, 1991, 2). Lyotard, by contrast, suggests examining the human by starting with two questions about its inverse, i.e. the inhuman: first, “what if human beings, in humanism’s sense, were in the process of, constrained into, becoming inhuman?” and, second, “what if what is proper to the human is to be inhabited by the inhuman?” (Lyotard, 1991, 2).

Our claim here is that Lyotard’s analysis of the inhuman allows for two forms of critique. First, it enables an original negative critique not just of AI or AI implementation but of HCAI as a form of discourse that remains largely within the frame of what Lyotard calls performativity. Second, it provides elements for a positive critique in its emphasis on bodily vulnerability, accentuation of dissensus, and preservation of the space for novelty. Mediating between these two forms of critique is the central insight that the human does not exist as a pre-determined entity, having set characteristics and values, that constructs a relation with the technological artefacts and systems that it builds. Rather, the human is to be seen as a relational entity that emerges within socio-technical systems. The preservation and protection of the human, if that is indeed an aim of HCAI or, more broadly, of human-centred technology, rely on this understanding. The following section will focus primarily on the negative critique, while the third section will build the positive one.

At the core of Lyotard’s analysis is the splitting of the inhuman into two parts, each represented in one of the two questions above: one that is a systemic constraint on the human and the other that is proper to it (Lyotard, 1991, 2). The first, which goes by several names in his work, such as performativity, development, or efficiency, increasingly constrains the human by reducing the space for expression of the second part. This second part, the part proper to being human, nonetheless perdures as a remainder or what Lyotard calls a “secret” to which the human is always held “hostage”. For the sake of clarity, we will use the term performativity here to refer to the first dimension of the inhuman. These two senses of the inhuman must not be confused with one another, but understood in terms of their relation.

Establishing this relation is difficult because the project of the first sense of the inhuman entails forgetting or effacing the second sense. In other words, what the principle of performativity cannot assimilate, its remainder, the human’s secret, is forgotten or concealed. This forgetting or concealment, however, leads to discontent or dissensus for reasons that are linked to the relation between the second kind of the inhuman and the fact of human embodiment.

The investigation of the first form of the inhuman in Lyotard’s thinking goes back to his earlier investigations of the transformations being enacted upon epistemic cultures by the ICT revolutions of the mid to late twentieth century. As several commentators have argued (Sim, 2001, Mackay and Avanessian, 2014, Simons, 2022), Lyotard’s arguments in The Postmodern Condition were not primarily about the decline of the “meta” or “grand narratives” that oriented epistemic or cultural projects in the nineteenth and early twentieth century around the realisation of the “life of the spirit” or the “emancipation of man” and manifested themselves in various forms of liberalism, human rights doctrines, socialism, etc. Rather, they revolved around the assimilation of these narratives into the principle of performativity. The post-modern, often described as the end of the metanarratives, is not so much about their end, but rather about the shift in their function and role. As Crouch (2004) explains in another register (the discussion of “post-democracy”), the post-modern makes use of the products of modernity, in this case the metanarratives, but departs from them in terms of its drive or ends. The role of these metanarratives had been, in large part, to allow for normative conclusions to be drawn from the denotative propositions generated by the sciences. The metanarrative(s) allow(s) for an orientation and contextualisation of the evidence produced by the sciences. What connected the metanarratives that once marked the modern period was their subject: the human qua autonomously judging subject. Performativity, by contrast, is not about humans; it is about saving time through the ever-increasing efficiency of processes as its own end. In a sense, this is a critique of industrial and post-industrial capitalism, but not only that. Capitalism is one historical manifestation of the principle of performativity, but it does not exhaust the principle, especially insofar as capitalism functioned just as well, and at least for a time arguably better, as a socio-economic system within humanist metanarratives. Nonetheless, Lyotard refers to performativity as the metaphysical principle of the socio-economic system “currently being consolidated under the name of development” (Lyotard, 1991, 2); the latter being the process of the increasing complexity and efficiency of information processing. Development occurs, or is driven, by the principle of performativity. The system of postmodern techno-scientific capitalism that characterises contemporary global society is a particular iteration of performativity-driven development but does not exhaust the term (Sebbah and Nancy, 2022).

Lyotard’s analysis also brings into question the order of ontological priority between the human as a technology user and technology as a human user (Lyotard, 1991, 12). This is key to his notion of the inhuman. By this account, all living systems are perceived as technical insofar as they filter information and respond to it in order to ensure their survival. What distinguishes humans from other types of entities in this class is the gregariousness and sophistication of their information processing capacities. The idea of the human subject as being technological in this broader sense, and, to some extent, even an artefact, does not belong solely to Lyotard but had already been well established in his intellectual milieu, in large part due to the broad cross-disciplinary reception of anthropologist Leroi-Gourhan’s work. The idea that technicity is not limited to humans or even mammals is present in Leroi-Gourhan’s findings in the 1960s: “technical action is found in invertebrates as much as in human beings and should not be limited exclusively to the artefacts that are our privilege” (Leroi-Gourhan, 1965, 237). Leroi-Gourhan argued that the capacities that are most closely associated with the specific form of human subjectivity – social memory, language, anticipation – are technical in this regard. What is important here is that technology is not (only) conceived as a tool that humans wield; technicity is seen as a milieu in which the human emerges and develops as a specific form.

In this sense, the question of human-centred technology is not a question about the relation of a fully formed entity that is taken for granted, i.e. the human, to its own artefactual creations. Rather, it names a mediation which shapes the emergence, development, and stabilisation, under particular constraints and parameters, of a particular bio-technical form: the human. The inhuman in the first of the two senses named above refers to a human body that is optimised to the principle of performativity. The optimisation occurs by adjusting the socio-technical systemic conditions or processes of mediation by which the human emerges and develops. If we follow the account of human-centredness above, HCAI is the label given to the development of a particular technology (such as AI) that mediates the process of optimisation in line with the principle of performativity (Russo, 2018).

In contrast to the performative inhuman described just above, the inhuman secret that marks the human, what Lyotard calls the “human in humans”, is an initial “misery”, or, as we put it, a vulnerability (Lyotard, 1991, 3), stemming from the peculiarity of our species’ post-natal corporeal development. The human child in its prolonged state of embodied helplessness and “inhumanity”, if we compare this state to the rational, judging human subject of the modern metanarratives, is human precisely because its initial state of distress and vulnerability “heralds and promises things possible” (Lyotard, 1991, 4). The initial long period of distress and vulnerability that marks the inhuman condition is what creates the sense of a possible future, insofar as the insertion of this new inhuman creature into the human community opens the possibility for change within that community. This vulnerability produces what Lyotard calls an “initial delay” in humanity. This makes the child dependent on (“hostage” to) an adult community, but continuously makes the inhumanity that it “suffers” from at its core apparent to that “human community”. The child, Lyotard’s “secret” which haunts and agitates adult humanity, is educated in a process where the institutions of culture, qua technologies, “supplement” the initial state of lack or vulnerability: the development of a second, institutional and cultural, nature. This process occurs within the enabling constraints of the artefactual milieu of information processing wherein the human, both as individual and life form, emerges. The process of enculturation and technological mediation, however, does not occur without remainder. Individuals struggle throughout their lives to conform to or negotiate institutions, but also to restructure them towards the goal of living better together. What persists from the initial childlike state of vulnerability, the inhuman, and manifests in adulthood, is the power to criticise, as well as the temptation to escape or reshape the institutional parameters of culture at a certain time through certain forms of activity: the arts, philosophy (Lyotard, 1991). In other words, the inhuman resistance to the performative inhuman now simply dubbed “human” stems from our peculiar corporeal vulnerability.

Lyotard continuously plays with the terms that he introduced to describe this inhuman condition. The promise of the initial distress is, in a conventional sense, the possibility of integration, reorganisation and also creation of institutions which allow the adult to claim their full humanity. The other side of the claim made by the adult is the recognition of the efforts of “consciousness, knowledge and will” by the community that shape and form said institutions. This also allows the “interests and values of civilization” to be interiorised, prompting the recognition of oneself by others and by institutions themselves as human (Lyotard, 1991, 4). What shifts in the transition to the post-modern that Lyotard charts are the “interests and values of civilization”: from modern ones based around the conscious, knowing and willing subject to post-modern ones grounded in the principle of performativity.

This vision of the “promise” of the inhuman entails a reconciliation of the inhuman and human, as culture forms a protective shell around the human coming into adulthood. By contrast, Lyotard’s project has been to indicate the ways in which this unharmonisable aspect still shows itself in human life under various headings and spheres of life. This is the other promise or secret, which goes under the name of “dissensus”, “event”, “thing”, or “differend”, but also “work” and “figure” in Lyotard’s writing. He calls this play between the two senses of the inhuman promise the “conflict of inhumanities” (Lyotard, 1988, 5). This conflict has become increasingly salient in the “post-modern” period. One of its characteristics is that political and socio-economic decision-making is legitimated in terms of efficiency. It is characterised by a cultural system into which the promise of the inhuman is integrated through socio-technical institutions that do not function according to a ratio that centres on either the human in the traditional modern sense or in Lyotard’s revised one, all the while retaining the name.

Lyotard demonstrates that performativity is not a human principle of technology but that the human use of technology is subsumed under this principle, with efficiency and growth as the only effective regulatory ideals. Butting against this first sense of the inhuman (performativity, development, efficiency) is the second, the unharmonisable remainder which thinking runs up against in philosophy and the arts, but also, for example, in claims of wrong-doing or injustice that are non-sensible within a specific institutional or juridical set-up. In these contexts, this remainder is not actively suppressed but simply invisible to this metaphysical principle and the institutions through which it operates as an organising principle. This presents what Lyotard refers to as a differend – the relation between two phrases that cannot be linked to one another in a comprehensible manner because they belong to different genres of discourse – in the contemporary notion of human-centredness within the “system”. For Lyotard, “genres” provide rules, proper to certain goals, for linking phrases together. But there is no universal genre, and linking phrases together according to a genre is always a matter of judgment and of the exclusion of certain phrases as non-sensical. If HCAI is aimed at the optimisation of human capacities within the specific socio-technical system that Lyotard claims characterises the post-modern, then this optimisation is towards performativity at the cost of its remainder, and subsequently at the cost of the possibility for novelty that emerges in the conflict of the inhumanities mentioned above.

The argument in The Inhuman aims to overcome the prejudice that there is a pre-given human that stands in relation to technology, around which technology could be centred in such a way that the relation does not shape the terms being related. By extension, the argument in The Differend is meant to refute the prejudice that there is a “human” that makes use of “language” towards its own ends. On this prejudiced view, a failure to achieve those ends or to communicate merely means that language needs to be used more frequently or better, just as a technology can be used better to align it more closely with goals that are conceived as objectively proper to the “human” (Lyotard, 1988, xii). The experience of the differend, the inability to link one’s phrases to the sensus communis, so to speak, in politics, the arts, and philosophy, is an expression of the inhuman: mute, helpless, and childlike beneath the protective casing of culture and human institutions. In this context, the notion of human-centred technology, and present iterations of AI in particular due to their constituent features, may be exemplary. While the human is not rendered non-sensical in this context, its sense is abridged to that part which makes sense to the principle of performativity, i.e. the enculturated adult that works as part of a project to optimise the social system, and/or the fulfilment of one side of the promise, while the other side, the inhuman of childhood vulnerability and indeterminacy, is subject to a differend.

What is left in this picture of politics and ethics, i.e. in the discourses in which “human-centred AI” is now situated? Lyotard’s analysis is that the possibility for some room for manoeuvre outside of the principle of performativity, that is, for novelty and dissensus, emerges from the “debt” to the other form of the inhuman, the vulnerable child, a debt that we never quite pay off precisely because it cannot be assimilated into a social totality. This embodied vulnerability that characterises the inhuman remains a font of novelty. Human-centredness then shifts from the growth and development of capabilities within a system primed for efficiency to the maintenance of incapability, vulnerability, and even distress, and to the witnessing of the differend: an in-human centredness.

What we have seen from the analysis of Lyotard’s introduction to the inhuman is that the concept has at least a double meaning within his work. On the one hand, there is the inhuman of the metaphysics of performativity. The notion of a regulating third term seems to characterise well the current state of data-driven technologies and the discourse around AI in particular: AI promises to be the mediating term between the human and performativity, optimising the human towards the principle of performativity while, in its human-centred form, protecting the human during that process of optimisation from the possible violence or disruption arising from too swift a transition. The protection is nonetheless in place so that the optimisation can proceed. The second form of the inhuman is the “miserable and admirable indetermination from which [the human] was born and does not cease to be born” (Lyotard, 1991, 7), linked closely to the themes of vulnerable embodiment, the differend, and novelty. This font of the human, which cannot be fully incorporated into the casing of culture and institutions, or into a social totality that can be optimised in terms of the efficiency of its dynamics, is both a source of vulnerability and of natality, as it resists efforts to harmonise or otherwise incorporate itself into an institutional order. Witnessing this is the project of what Lyotard calls “philosophical politics”, apart from the politics of intellectuals and politicians (Lyotard, 1988, xiii).

From the modern perspective, human-centred technology fits neatly within the accounts of “labour enhancing” or “capacity building” technologies. In this picture, technology plays the role of the mediating force that facilitates better or worse alignment of the human or human society towards the regulative ideals of the grand humanist narratives. This modern story is still the one that fits the current discourse about human-centred AI. What has changed, from a “Lyotardian” perspective, is that the orienting principle of human society is no longer an anthropocentric one, one of the grand narratives, but rather the metaphysical principle of performativity, into which the human is integrated but which does not function as a regulative ideal. Thus, in the post-modern context, “human-centred” means to “better”, i.e. more smoothly and efficiently, align the human to the exigencies of the performative, to the inhuman. A human-centred AI or, more generally, technology does this in the most reliable and painless manner, causing the least resistance, friction, and suffering. This may entail the integration of humanist values, as we see in all contemporary forms of ethical guidelines for technology (autonomy, transparency, accountability, some notion of fairness and equality, etc.), but they are no longer regulative. As Lyotard put it in The Postmodern Condition: within this context, a concern for rights “does not flow from hardship, but from the fact that the alleviation of hardship improves the system’s performance” (Lyotard, 1984, 63).

Lyotard’s second version of the inhuman, stemming from our initial state of corporeal vulnerability – the “miserable and admirable indetermination” – and making us continuously open to novelty, presents another way to understand the idea of human-centredness: human-centred technology holds open the place for the inhuman, understood in terms of the fundamental or core indeterminacy that marks the human condition.

4 From the Paradigm of Domination to Vulnerability

Against the background of Lyotard’s analyses, the question of what it could mean, or whether it is possible, to design technology in a fashion that is human-centred in Lyotard’s second sense can be readdressed, and perhaps taken even a little further: Why are we humans so sure that it is a good thing to make technology human-centric? Both questions can be answered by turning to Lyotard’s analysis. On the one hand, the human in a human-centric technology can be determined negatively: by looking at the inhuman, at the limits of the gesture of superiority of the human conceived within the principle of performativity. Lyotard’s analysis purports to show that it has become increasingly unclear what the human could specifically be beyond better and more productive harmonisation with the principle of performativity, or beyond the gesture of negation entailed in the acknowledgement of the above-mentioned debt to childhood. The latter entails resistance to the essential driving force in performative or efficiency-optimising systems. In this sense, what is dubbed the human is constrained to become more machine-like in its functioning. Technology, in a word, threatens to become performatively inhuman when it is used in such a way that the practices of being human are increasingly mechanised, be it in the sense of a (purely) mechanical determination of what constitutes human life as human life, or in what people define for themselves as worth striving for and optimising (Norman, 2024, Norman, 2021).

From the limits of human experience, the inhuman, we gain a different normative perspective on human life. If we put this in terms of the second form of the inhuman, it becomes an exigency for openness to what remains inexpressible within a particular socio-technical system: those dimensions of human life that lie outside the sensus communis of the performative system. This can be understood in familiar terms as claims to the recognition, but not the performative integration, of, for example, racialised forms of life, the multiplicity of forms of gender and sexual expression, and other forms of life whose expressions are not meaningful within a particular set of cultural and social institutions. The view of the inhuman then means the widening of the definition, the emphasis on the otherness of human life forms, the strangeness even in what one calls one’s own, and, importantly, an abandonment of the project to create a universal language or genre that is capable of linking all phrases or forms of expression with one another. This perspective on what is human-centric directs attention away from the paradigm of dominance and control, or of protection against vulnerability, towards a paradigm of openness to the kind of vulnerability attested to in the second form of the inhuman.

What is often meant when talking about human-centric AI is the question of power and how to dominate new technology or how new technology can potentially dominate humans. Are there new forms of cheating? Are robots taking our jobs? Or are there language models that are possibly just as “intelligent” or “conscious” as human beings are? These questions also seem a little surprising because machines are by no means the only form of non-human intelligence that enriches human life or contributes to sense-making in our human worlds. After all, it is more than obvious that monkeys, crows, pigs, and other living beings exhibit forms of intelligence and consciousness without humans feeling seriously threatened by their intelligence.

At the same time, the dominance perspective in the observation and description of non-human life forms allows us to keep these questions at bay in a relatively stable way. The questions are not infrequently linked to the underlying claim of wanting to maintain one’s own position of power and strength. Humans are not worried about the monkey’s, pig’s, or corvid’s intelligence because we do not doubt our capacity to dominate these other intelligences. To avoid misunderstandings: these questions about AI dominating humans are legitimate and important. However, they become distorted when other questions about the recognition of human life forms are not acknowledged as being at least as relevant: Who is visible with their needs for recognition, who is granted participation rights in social life and decision-making processes, and for what reasons? In other words, we cannot seriously think about the status of machines without at the same time focusing on the varying life forms of human entities regardless of gender, cultural background, work status, or social recognition, and discussing their participation.

Nonetheless, it is important that the notion of preserving the “inhuman in the human” as a recognition of forms of life or vulnerability is not reduced to a question of inclusivity in the terms that Lyotard rejects and that projects like AI and Big Data seem to embrace. This would entail the project of making all phrases or forms of life linkable by rendering them into a universal language or ontology (in the computer science sense of the term). In the case of how we think about AI, this probably means something like treating AI as a different kind of intelligence, the way we do with crows or slime mould, rather than conceiving it as a human intelligence and attempting to retrofit human thinking into a universal data ontology comprehensible to an AI, thereby erasing precisely those differends or moments of dissensus that testify to the inhuman. The lesson is not to represent AI as an avatar of human intelligence, hence the need to make it human-centred, when in fact what it does so well is to show us something else: that there is a “secret” of human experience which makes it inexhaustible, not fully representable within any particular genre (or data ontology). A serious encounter with corvids or slime moulds takes us from seeing them in a manner analogous to our own experience or intelligence to perceiving them as a different form of intelligence that we cannot fully comprehend. The encounter with AI is similar. It opens up something like a differend or dissensus which shows that human experience cannot be integrated into it, just as corvid or slime mould intelligence cannot be integrated into our own.

Likewise, this tells us something about the near-equation of transparency, trustworthiness, and human-centredness in much of the discourse. The lesson of the inhuman is precisely the lack of transparency of human experience (even to itself), and the impossibility of rendering transparent how the data derived from that experience is being processed by another intelligence: untransparent data going in – untransparent data coming out. Attempts to force transparency risk creating the injustice of the phrase or experience that simply does not appear within an otherwise transparent system.

As other scholars, especially those closely linked to critical theory, have pointed out (Delanty and Harris, 2021), this is precisely where the use of AI presents a challenge. AIs depict certain social realities at a given point in time, on the basis of the data they have been trained on. We can learn from Lyotard that with the reference to human-centredness there is now the danger that a particular observation of what it means to be human is overgeneralised. This is a problem in that either nothing can be said at all, because it becomes completely unclear what exactly is meant by human here, or the focus falls on only one particular understanding of the diversity of human life forms.

Hence, the use of AI comes with the big caveat that it risks perpetuating structural injustices (Young, 2008), both as a result of bias or under-representativeness in training data, and through the attempt to overcome under-representativeness by means of a universal system of linkages that ignores differends in the phrases or experiences from which the data is derived (Waelen, 2022). These two issues are not entirely distinct from one another. The invisibility of marginalised groups is due to underrepresentation in the data and data collection, as well as underrepresentation in the pattern recognition of language models and decision support systems. However, underrepresentation is not just a question of needing more or better data, but of being unable to establish linkages with or between certain phrases (i.e. being unable to datify them or render their meaning intelligible to machine learning or intelligence) due to an existing differend or dissensus and the lack of a universal genre of discourse to regulate these conflicts. A constructive critique of (in)human-centredness in AI development seeks to find ways to accentuate the visibility of the differend or dissensus itself, not to resolve it through the integration of the form of experience into the data processing of the AI system.

Human-centredness without a strict focus on the inhuman, in Lyotard’s second sense, can fail to recognise and balance these claims in a way that is mindful of the interdependence, opacity, and vulnerability of human life (Braun, 2020). At the same time, the inhuman fulfils the function of a counter-narrative by negating, questioning, and reasserting what is considered to be given. This is precisely where the connection to a critical perspective, and the benefit of such a perspective for the debates on HCAI, lies. On the one hand, it questions the hubris with which it is assumed that human subjects can always make better decisions than technical artefacts. On the other hand, it also focuses on the question of which economic and social power structures ensure that these very narratives of superiority repeatedly create new experiences of vulnerability, in particular for marginalised groups.

5 New Perspectives on (In-) Human-centred AI

The necessary openness of the question of what constitutes the human in the human, indicated by the figurations of the inhuman, also shows a high overlap with the question of what is common to the different principles and criteria of AI ethics. There are indeed many proposals for defining principles for a responsible approach to AI, and these overlap considerably across different cultures and schools of thought (Jobin et al., 2019, Gal, 2020). At the same time, however, the question arises as to what is the common moral denominator of these principles. That such a common denominator is crucial is shown not least by the historical and current experiences of injustice of many marginalised groups. In medical ethics, this insight has led to the recognition that whenever there is a conflict between principles, there are strong arguments for a priority rule of, for example, self-determination over care (Mittelstadt, 2019). So far, there is no such definition of a common orientation towards a target norm for the field of AI ethics (Munn, 2022). A human-centredness that is aware of the fundamental vulnerability of human life and stands in the way of any experience of structurally applied harm could make an important contribution here. The path urged by Lyotard in his work on the inhuman and on language is to warn precisely against the quest for a “universal genre” to regulate conflicts between phrases or genres, e.g. that of self-determination vs. care, in favour of an ethics or politics that witnesses these conflicts. This points us back to the site of vulnerability, or of childish stupor, as the point where such a conflict can be witnessed.

Understanding vulnerability as such a basic experience of human life requires, at the same time, that we determine more precisely what could be meant by such vulnerability. Here, two levels of vulnerability must be distinguished, even though they are intertwined.

Firstly, there is vulnerability in the sense that the constitution of human integrity and identity does not begin with oneself: humans (as well as other mammals) are born, do not choose their names themselves, see themselves through the gaze of others, and have experiences that shape how they enact and narrate themselves. These basic experiences can be understood at a very fundamental level that many human and non-human animals share. Phenomenological as well as feminist approaches have repeatedly pointed out that it is these experiences that fundamentally place living beings in relations of care. For the description of this fundamental experience of vulnerability, the second level of Lyotard’s description of the inhuman can help: human life is multifaceted and cannot be pinned down to a particular experience but remains vulnerable in that it comes to (self-)recognition in manifold experiences that never permit full identification with or by oneself. At this point, there is a high degree of convergence with Miranda Fricker’s work on epistemic injustice (Cartlidge, 2022), first in a methodological sense, namely in the question of the vantage point from which one looks at the processes of human life. Fricker writes: “The requirements of what we might call epistemic justice are surely many and various. But we can begin to get the measure of it by looking first to basic kinds of epistemic injustice, whose negative imprint reveals the form of the positive value. As a general point of philosophical method, I believe that taking failure as one’s starting point is a good strategy.” (Fricker, 2013, 1318) The aim is to look at the figuration of the formation of knowledge: What do we define as properties, capacities, and conditions of human life, and on what grounds (Fricker, 2012)? HCAI would then be a perspective that systematically examines, from the margins of human life forms, where experiences of human life are at risk of being excluded.

The second level of vulnerability refers to claims for respect for the freedom and self-determination of human bodies, claims that arise in the specific modes of experience of bodies and, above all, in the respective responses to these experiences. Judith Butler has pointed out in her work that this is precisely where the precariousness of the realisation of freedom lies. The first level of vulnerability points to a vulnerability that cannot be avoided, the recognition of which is an essential task in order to do justice to human life in its multiplicity of life forms. The second level refers to vulnerability in terms of bodily precariousness (Butler, 2006, Butler, 2015). By precarious we mean that claims to self-determination are not respected (Bleher and Braun, 2022, Braun et al., 2021) and that bodies are harmed (Unruh, 2022). At this point, the questions Lyotard raises with the first sense of the inhuman come into play: a precise disclosure of power and dominance relations is needed in order to determine at what point claims to visibility and recognition are not recognised and how this renders bodies vulnerable or precarious. It is therefore a matter of avoiding harm in the sense of social recognition. Wherever human life is narrowed down to certain mechanisms, modes of operation, or optimisation claims, and freedom is thereby restricted, the inhuman comes into play in its critical function (Young, 1981). In this function, the lens of the inhuman can help to uncover restrictions on the realisation of freedom and to break through power asymmetries. From this perspective of the inhuman, the reference to HCAI then calls, first, for an examination of what forms of discrimination and marginalisation are inscribed in the underlying data and detected patterns. And, second, it functions as a critical appeal whenever the use of predictions based on these patterns makes the future of certain groups worse and violates their rights in this sense (Mohamed et al., 2020). An (in)human-centred AI is one that remains attentive to both of these forms of vulnerability.

6 Conclusion

We have argued that the current discourse on HCAI remains conceptually underdetermined on two levels. Firstly, on the epistemic level: What are the experiences and criteria we use to determine what constitutes a good human life? And, secondly, on the level of social recognition: Which forms of power lead to the disregard of which claims to respect for self-determination? What is clear, we argue, is that the current discourses of HCAI are situated within the sphere of economic rationality. HCAI focuses on the optimisation of human-AI relations in a manner that is constrained to economic rationality. This holds to a large extent also for principle- or rights-based approaches, where rights claims protect the vulnerable human against over-exploitation within, or exclusion from, the economic sphere. Lyotard’s figure of the inhuman, as it fits with his thinking about the epistemic transformation that marks the post-modern and the philosophy of the differend, helps to characterise both a negative critique of HCAI and a positive one. The former is in terms of the optimisation of the human through AI towards increased performativity of the socio-technical system. The latter is in terms of attentiveness to two forms of vulnerability revealed in Lyotard’s account: first, a sense of vulnerability linked to human corporeality and the inhuman as a secret or hidden remainder of the human, which it is now the task of philosophy or ethics to accentuate; second, a vulnerability to an increasing systemic constraint on the possibilities for human expression where these are not meaningful to data processing within a system optimised for increased performativity. Both senses refer us to ongoing discussions of epistemic injustice and precarity.

Lyotard’s analysis enriches these discussions by providing conceptual resources to argue that the aim of human-centredness is not to integrate the excluded phrases or even to recognise them within the performative system, but rather to identify and accentuate these points of tension. Likewise, the aim of human-centredness is not to protect the human from precarity within a performative system, but to acknowledge the initial form of corporeal precarity, identified in the analysis of the inhuman, as a source of novelty that can potentially undermine the principle of performativity as the driver of contemporary socio-technical systems. Understood in this way, HCAI is not another vague concept on the table of policy-making. Instead, insofar as it demands attention to forms of vulnerability as its guide, HCAI makes a strong case for, and is at the same time a promising method for, the responsible and just use of AI.