1 Introduction

Of the many long-lasting impacts of the COVID-19 global pandemic, it may well turn out that one of the most significant was the rapid acceleration of digitalisation and automation within healthcare systems (Mhlanga 2022; Islam et al. 2021). The speed with which people, particularly in richer countries, became familiar with new technologies in their healthcare was driven by two developments made politically possible by the pandemic. First, lockdowns and the demand for social distancing and self-isolation meant that patients were either prevented or strongly discouraged from attending primary healthcare clinics in person. In some countries consultations moved online, intersecting with broader developments in the growing use of both privately funded and tax-funded online consultations. Second, the absence of physical access to clinics shifted attention to the growing availability of health apps. Even in countries like Sweden, with public healthcare systems, many patients were encouraged by their doctors to use privately owned, for-profit health apps as an at-home alternative (Savage 2021). The article sees digitalisation and automation as separate but overlapping phenomena which refer, respectively, to the use of digital interfaces and to the automation of interactions and decisions otherwise undertaken by humans. The digital automation of health has, as the article shows, become a catalyst for wider political acceptance of artificial intelligence-based technologies within societally sensitive domains.

This article looks at the growing acceptance of artificial intelligence through discourse around the development of digital automation in health as it has unfolded within three key institutions of global governance: the Office of the United Nations High Commissioner for Human Rights (OHCHR), the World Health Organization (WHO) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). These three organizations are United Nations (UN) bodies that all produced policy documents on AI before, during and after the pandemic, as well as being significant in global policy developments around healthcare. Focusing on them allows us to explore the changing narratives on artificial intelligence as communicated through the example of digital automation in healthcare over time, and to gauge emerging trends within the norms used to regulate human society. Policy documents from all three produced between 2014 and 2021 represent a rapid transformation in the discursive framing of artificial intelligence: a shift from AI being seen as a problem to it being framed as a remedy for some of the most pressing problems in healthcare. Important to this narrative development is the role played by discourses of ‘crisis’. Among the conditions identified as enabling major political and policy shifts is that key actors collectively frame the established approach as experiencing a moment of crisis (Bacchi 2009). To be clear, this does not mean focusing on the material effects of historical crises but, rather, on the political impact of moments in which there is a dominant narrative framing events as a ‘crisis’.

During the height of the pandemic the most prominent crisis narrative was COVID-19. Mass media discussion of the pandemic frequently referred to technology as the solution by which national healthcare systems and individuals might better manage the spread of the virus. The policy discourse of the OHCHR, WHO and UNESCO provides a means by which to study that narrative within an institutional setting where its political impact can be traced. By looking at documents from 2014 to 2021, the analysis contextualizes the shift that was most evident at the height of the pandemic.

Since the height of the pandemic the focus on digitalisation as a solution has continued, buoyed by a continuing crisis narrative that expanded to encapsulate a wider story of ‘crisis’ within the social healthcare model. For example, a recent book written by the National Director of AI for Health & Life Sciences at Microsoft, Tom Lawry, articulates its central message in its title and subtitle: ‘Hacking Healthcare: How AI and the Intelligence Revolution Will Reboot an Ailing System’ (Lawry 2022). That same narrative has spread across multiple fora and publications, including within national AI strategies (Author 2; Authors 1 and 2, forthcoming), with speakers and writers articulating an increasingly hegemonic narrative that AI is not only a good idea but also an urgent necessity to cure a healthcare system in crisis. Several of the main events at WHO’s 2022 World Health Congress, for instance, were dedicated to the topic.

The digitalisation and the automation of health cover a broad array of technologies, ranging from preventative healthcare, such as self-tracking apps (Ruckenstein and Dow Schüll 2017) and lifestyle guidance (Rabinowitz et al. 2022), to clinical support for consultations (Nadarzynski et al. 2020) and diagnoses (Elemento et al. 2021), as well as general logistics (Ageron et al. 2018). Societally and politically the term ‘digitalisation’ is an empty signifier (Laclau and Mouffe 2001) linking multiple and otherwise distinct practices (i.e., smartphone apps, logistics management, diagnostics) within a temporary relationship of equivalence collectively opposed against a shared ‘other’. That ‘other’ is not necessarily just ‘non-digital healthcare’ but, rather, whatever is articulated within the social–historical context as representing that which falls outside the equivalential logic of ‘digital health’. That might be ‘ill health’, but in other cases it could be something more specific, such as ‘COVID-19’ or ‘inefficiency’. Consequently, the article does not claim that the term ‘digitalisation’ as used within the discourse it traces necessarily refers to a specific healthcare technology with respect to its technical design and workings. What matters for the analysis is that the joint phenomenon of digitalisation and automation in healthcare has come to acquire a particular series of meanings within a narrative that, in turn, has real-world impacts by changing the conditions of legitimacy and, in this case, of ‘good healthcare’ within global policy discourse. The analysis presented is important because it helps us understand the changing societal understandings impacting how we seek to achieve optimal outcomes for human health as well as the appropriate social or market models for that undertaking.

1.1 Rethinking the technology–regulatory nexus

Governance developments in relation to digitalisation and the increasing use of technologies loosely coupled under the label ‘artificial intelligence’ are often viewed as reactive and therefore largely determined by that technology (Hoffmann-Riem 2020). A rapidly growing consensus has emerged within the institutions of global governance in which these technologies are perceived as having profound social, political, and economic impacts, both bad and good (Erman and Furendal 2022). As Schmitt describes it, it appears as if technological developments have ‘triggered a frenzy of regulatory initiatives at various levels of government and the private sector’ (2021: 1). While not questioning the need to regulate such technologies so that they benefit society and potential risks may be mitigated, our analysis questions the assumption that regulation is only reactive to such technological developments. A technologically deterministic view of regulation is problematic because it ignores the role played by legislative norms in structuring the social and economic relations constituting that technology (Gervais 2021; Nordström 2021). Rather, the argument that regulatory developments ‘must’ respond to technology needs to be unpicked as part of a particular narrative imbued with power relations regarding what should be governed, by whom, and to what purpose (Firlej and Taeihagh 2021). None of the three UN bodies studied here was created to produce governance norms intended for application to data technologies and yet, as will be shown in the analysis, they have nevertheless accepted that role within their already demanding and often under-resourced mandates. How did that change come about?

It is a well-established observation in the policy sciences that one of the conditions for a regulatory institution to expand its jurisdiction is a perceived moment of ‘crisis’ (March and Olsen 2006). Since at least the mid-1980s, the policy sciences have been aware that moments in politics narrated as ‘crises’ often provide the basis for the acceptance of new policies, even where those initiatives have very little relevance to the supposed problem at hand (Keohane 2002; Kingdon 1995; Bacchi 2009; Zahariadis 2014; Boin et al. 2007). For the institution concerned, ‘crisis’ narratives serve to signal changes in policy direction. However, ‘crisis’ can also be treated as an idea in itself, one that some actors may well promote in the hope of creating the conditions conducive to their preferred ‘solution’ (Hall and Taylor 1996; March and Olsen 2006). This can be witnessed in moments of paradigm shift (Hall 1993; Sabatier 1993), and it features prominently within the history of institutional change (Capoccia and Kelemen 2007), such as the economic transformation of the late 1970s in which Keynesian welfare economics was successfully problematised and replaced by the present neoliberal hegemony (Blyth 2002). For that reason, moments of crisis are not treated here as exogenous phenomena forcing change but, rather, understood as political discourses endogenous to those institutions.

In the context of political institutions, a discourse of ‘crisis’ not only helps give a new audience to calls for reform; it can also support the case for strengthening the institution’s competencies by justifying new momentum within policy learning (Radaelli and Schmidt 2004: 189). For example, this is well documented within the multilateral trade regime, which owes its expansion, transforming from the General Agreement on Tariffs and Trade into the World Trade Organization, to a series of ‘crisis’ moments (Wilkinson 2009). In that case, the significant increase in the institution’s competencies took place in a context in which politicians and media commentators frequently voiced concern that the ‘crisis’ signaled the institution’s demise (Strange 2014). Crisis narratives are what Schmidt has labeled ‘doomsday scenarios’ that generate political pressure for policy change (2008: 309). In hindsight, expressions of ‘crisis’ may well serve to ultimately strengthen rather than weaken an institution. Actors seeking to maintain the institution can utilize a ‘last chance’ frame to enable their demands, while dissenting voices are placed under intensified pressure to be quiet. Approaching ‘crisis’ as a narrative allows research to step back and ask not only how that frame has been formed but also what other frames or political demands it facilitates. The importance of crisis narratives is therefore considered in the article’s analysis of the shifting perception of healthcare’s digital automation within OHCHR, WHO, and UNESCO, as outlined next.

2 Methods

The analysis traces a shift in the framing of AI governance within OHCHR, WHO, and UNESCO through policy documents produced separately by the organizations. The relevant documents were identified by searching the organizations’ databases, with those related to AI and healthcare being selected. The analysis includes several documents in which healthcare plays only a small role because those texts nevertheless provide a means by which to trace emerging narratives around the role of digital automation in healthcare. The earliest example found was from 2014, and the research included all relevant publications up to December 2021. Table 1 shows the documents included in this research.

Table 1 List of policy documents analyzed

Qualitative comparative content analysis of the documents was undertaken. Initially, all references to health or healthcare were coded in NVivo. The keywords included ‘medicine’, ‘health(care)’, ‘disease’, ‘illness’, ‘medical’, ‘diagnosis’, ‘cancer’, ‘nurse(s)’, ‘physician(s)’, ‘doctor(s)’, ‘patient(s)’ and ‘pandemic’. Further to this, references to ‘crisis’ were also coded. These were not always ‘crises’ explicitly named as such in the text, nor were they limited to the COVID-19 ‘crisis’. Rather, in line with the understanding put forward by Hall and Taylor (1996) and March and Olsen (2006), ‘crisis’ was understood as ideas and events used to create the conditions for promoting preferred ‘solutions’, namely the organizations’ claims regarding the need to govern AI. Doing so allowed for reflection on the ‘crisis’ narratives drawn on over time by the organizations, as well as an analysis of the relationship between ‘crisis’, health, and the justifications for the need to govern the digitalisation and automation of healthcare. To be clear, in keeping with the above theoretical discussion on the role of ‘crisis’ narratives in politics, the analysis was not focused on identifying objective crises but, rather, on wherever the documents articulated a crisis narrative. While AI was referred to generally by all three institutions, at times specific examples of AI applications were provided. These included machine learning and contact tracing (OHCHR), disease surveillance (WHO), and robotics, diagnosis, and chatbots (UNESCO).
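To make the mechanics of this first coding pass concrete, the sketch below approximates it in Python. It is purely illustrative: the study itself used NVivo, and the corpus directory, file names, and simple sentence splitting shown here are assumptions rather than a description of the actual workflow.

    import re
    from pathlib import Path

    # Keyword lists mirroring the coding scheme described above; a word
    # boundary plus a trailing \w* catches plural and compound forms such
    # as 'nurses', 'patients' or 'healthcare'.
    HEALTH_TERMS = [
        "medicine", "health", "disease", "illness", "medical", "diagnosis",
        "cancer", "nurse", "physician", "doctor", "patient", "pandemic",
    ]
    CRISIS_TERMS = ["crisis", "crises"]

    def code_document(text, terms):
        """Return, for each term, the sentences in which it appears."""
        sentences = re.split(r"(?<=[.!?])\s+", text)
        hits = {}
        for term in terms:
            pattern = re.compile(rf"\b{term}\w*", re.IGNORECASE)
            matched = [s for s in sentences if pattern.search(s)]
            if matched:
                hits[term] = matched
        return hits

    # Hypothetical corpus layout: one plain-text file per policy document,
    # e.g. corpus/ohchr_2014.txt, corpus/who_2021.txt, corpus/unesco_2021.txt.
    for doc in sorted(Path("corpus").glob("*.txt")):
        text = doc.read_text(encoding="utf-8")
        health_hits = code_document(text, HEALTH_TERMS)
        crisis_hits = code_document(text, CRISIS_TERMS)
        print(doc.name,
              {term: len(sents) for term, sents in health_hits.items()},
              sum(len(sents) for sents in crisis_hits.values()))

A sketch like this covers only the mechanical retrieval of coded passages; the interpretive step, deciding whether a flagged passage articulates a ‘crisis’ narrative in the sense of Hall and Taylor (1996) and March and Olsen (2006), remains a qualitative judgment.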

The paper does not claim to map the global governance of AI and health. Further, it does not claim to provide a comprehensive analysis of AI and health in the publications of OHCHR, WHO, or UNESCO, nor is it an intellectual history of the role of crisis narratives in the policies of these organizations. It should be understood that crisis is not treated as a fact separate from the institutions but, rather, as a political discourse serving to legitimate a governance shift. As such, the analysis is directed specifically at tracing the changing governance norms concerning the role of AI within healthcare and the role of crisis narratives in that shift.

2.1 Findings

To best reflect the developing narratives of crisis within the three organizations, each is examined in turn. The policy documents for each organization are presented in chronological order according to their date of publication. We begin with OHCHR, which was the first to produce a policy document on AI, in 2014. Following this we turn to WHO, whose first policy document was published in 2018. Finally, UNESCO is examined, as it was the last to produce such a document, in 2019.

2.1.1 Office of the United Nations High Commissioner for Human Rights (OHCHR)

Resolution 68/167 on the right to privacy in the digital age was adopted by the United Nations General Assembly in December 2013. The Resolution included a request that OHCHR regularly submit an updated document, titled The Right to Privacy in the Digital Age Report, to the General Assembly to allow it to monitor the situation, which OHCHR did in 2014, 2018, 2020, and 2021. Resolution 68/167 was a response to growing concerns regarding the increasing scale and scope of digital State surveillance and its impact on the right to privacy. The tipping point at which these concerns became a ‘crisis’ was noted in the first Right to Privacy in the Digital Age Report in 2014, namely the:

… revelations in 2013 and 2014 that suggested that, together, the National Security Agency in the United States of America and the Government Communications Headquarters in the United Kingdom of Great Britain and Northern Ireland have developed technologies allowing access to much global internet traffic, calling records in the United States, individuals’ electronic address books and huge volumes of other digital communications content (OHCHR 2014, 3).

In the 2014 Right to Privacy in the Digital Age Report, health played a very small role, the only reference being: “Other rights, such as the right to health, may also be affected by digital surveillance practices, for example where an individual refrains from seeking or communicating sensitive health-related information for fear that his or her anonymity may be compromised” (OHCHR 2014, 5). However, as with the Preliminary Study on the Ethics of Artificial Intelligence (UNESCO 2019b), the crisis had shifted by the publication of OHCHR’s (2018) Right to Privacy in the Digital Age Report. The opening paragraph reflects this shift from a ‘crisis’ arising from digital State surveillance to a crisis in the growing power of certain private sector interests:

Driven mostly by the private sector, digital technologies that continually exploit data linked to people’s lives are progressively penetrating the social, cultural, economic, and political fabric of modern societies. Increasingly powerful data-intensive technologies, such as big data and artificial intelligence, threaten to create an intrusive digital environment in which both States and business enterprises are able to conduct surveillance, analyse, predict and even manipulate people’s behavior to an unprecedented degree… (OHCHR 2018, 2).

Again, health was referred to only minimally, in relation to people’s growing digital footprint (which includes health data) and the danger that ranking or scoring people based on profiles could reduce access to healthcare, as well as to insurance and financial services (OHCHR 2018).

In the 2020 Impact of New Technologies on the Promotion and Protection of Human Rights in the Context of Assemblies, Including Peaceful Protests (OHCHR 2020), we begin to see the COVID-19 pandemic being introduced as a ‘crisis’, one framed as potentially exacerbating a range of negative impacts of digital technology in the context of assemblies and protests. Health moved from playing a minimal role to a central one in OHCHR’s justification for the governance of AI. OHCHR (2020, 2 emphasis added) states that:

The year 2019 was momentous, with protests taking place in many countries in all regions. That discontent has continued in 2020. The factors causing people to protest were, and continue to be, complex and varied. Structural and institutional racial discrimination, worsening socioeconomic conditions, corruption, inequality and the denial of other human rights were some of the common root causes. Many of these concerns lie at the core of the 2030 Agenda for Sustainable Development and may have been exacerbated by the coronavirus disease (COVID-19) crisis.

In the 2021 Right to Privacy in the Digital Age Report, State surveillance and the power of the private sector are downplayed; instead, the need to counter the potential ‘crisis’ stemming from AI systems themselves is used to justify the governance agenda:

No other technological development of recent years has captured the public imagination more than artificial intelligence (AI), in particular machine-learning technologies. Indeed, these technologies can be a tremendous force for good, helping societies overcome some of the great challenges of the current time. However, these technologies can also have negative, even catastrophic, effects if deployed without sufficient regard to their impact on human rights. (OHCHR 2021, 2 emphasis added).

The COVID-19 pandemic is also drawn upon in a manner similar to its use in Impact of New Technologies on the Promotion and Protection of Human Rights in the Context of Assemblies, Including Peaceful Protests (OHCHR 2020). The 2021 Right to Privacy in the Digital Age Report notes that:

While the present report does not focus on the coronavirus disease (COVID-19) pandemic, the ongoing global health crisis provides a powerful and highly visible example of the speed, scale and impact of AI in diverse spheres of life across the globe. Contact-tracing systems using multiple types of data (geolocation, credit card, transport system, health and demographic) and information about personal networks have been used to track the spread of the disease. AI systems have been used to flag individuals as potentially infected or infectious, requiring them to isolate or to quarantine (OHCHR 2021, 2 emphasis added).

While OHCHR has not abandoned its initial problematisation of AI as a threat to the right to privacy, that framing has become tempered by a stronger focus on automation within healthcare as a solution and an inevitable development. OHCHR’s 2021 report still warns that AI poses a threat to privacy through data collection, yet that negative frame is diluted where the implementation of AI, in the form of automation in healthcare, has become positioned as compatible with OHCHR’s self-identity as a human rights body. As outlined next, despite these earlier reports by OHCHR, WHO was slow to engage directly in discussion on the application of AI within healthcare.

2.1.2 World Health Organization (WHO)

It was only in 2018 that WHO published its report on a meeting held in 2017, Big Data and Artificial Intelligence for Achieving Universal Health Coverage: An International Consultation on Ethics (WHO 2018a). The purpose of the meeting, a consultation with key actors and experts, was to address the expansion of public and private data, its use, and how “Big Data” and AI are already impacting people’s lives (WHO 2018a, VI). The report opened by explaining the justification for the need to focus on these governance questions, including issues of consent, privacy, confidentiality and security; the governance of data; WHO’s stewardship of data; who the appropriate users of health data are; and decision-making and policy recommendations based on probabilistic and imperfect data (WHO 2018a, VI). Several pages later, the report states that, in relation to WHO’s strategic goals on universal healthcare, better protection from health emergencies and improved health and well-being, “artificial intelligence is playing an increasing role in disease surveillance and our defenses against outbreaks” (WHO 2018a, VII).

The Use of Appropriate Digital Technologies for Public Health Report by the Director-General of WHO in 2018 (WHO 2018b) was, in contrast to WHO (2018a), generally very positive about AI in health and its potential to work toward positive health outcomes. It documented the potential of digital technologies (including AI) to help in working toward the Sustainable Development Goals 2030 where these relate to health. Disease surveillance was given as an example, involving “gathering information and data on epidemics and health indicators directly from affected populations or other stakeholders, through approaches such as ‘crowdsourcing’ or ‘community reporting’” (WHO 2018b, 3). In general, the two reports in 2018 framed WHO’s role as optimizing the opportunities of AI in health while minimizing the risks. Compared to OHCHR, WHO was therefore earlier in framing AI’s capacity for surveillance as a ‘good’ thing in the context of digitally automated healthcare.

By the time WHO published the Ethics and Governance of Artificial Intelligence for Health: Guidance 2021 (WHO 2021) (hereafter WHO Guidance 2021), we can see a significant change from its previous approach. WHO (2021) draws on a ‘crisis’ narrative to justify its role in the governance of AI in health.

While disease outbreak and epidemic management had not previously been significant features of the perceived potential of AI, they featured very prominently in the WHO Guidance 2021, being tied to the COVID-19 ‘crisis’ (WHO 2021). In the Executive Summary, the WHO Guidance 2021 states:

AI for health has been affected by the COVID-19 pandemic. Although the pandemic is not a focus of this report, it has illustrated the opportunities and challenges associated with AI for health. Numerous new applications have emerged for responding to the pandemic, while other applications have been found to be ineffective. Several applications have raised ethical concerns in relation to surveillance, infringement on the rights of privacy and autonomy, health and social inequity and the conditions necessary for trust and legitimate uses of data-intensive applications (WHO 2021, xii).

While the role of WHO in managing the positive and negative impacts of AI in health did not change significantly compared to the earlier WHO reports considered here, the narrative drawn on to justify it did, namely the COVID-19 ‘crisis’. Shortly after the above quotation, the need for greater regulation of AI in health due to the COVID-19 ‘crisis’ was made explicit:

With the rapid proliferation and evolving uses of AI for health care, including in response to the COVID-19 pandemic, government agencies, academic institutions, foundations, nongovernmental organizations, and national ethics committees are defining how governments and other entities should use and regulate such technologies effectively (WHO 2021, 2).

The COVID-19 pandemic was the first of several crises utilized to justify the role of WHO in the governance of AI and health. After referring to the COVID-19 pandemic, WHO (2021) then referenced other crises that it had not previously mentioned (WHO 2018a, b), such as State surveillance, big tech companies and their increasing power, as well as, to a lesser extent, the climate crisis. In the short time WHO has been considering AI, it has quickly established a narrative in which the digital automation of healthcare is a hugely positive development, but one whose realization requires WHO’s governance role. The same narrative can be seen in UNESCO which, despite not being a health agency, was early to associate health with AI, an association that rapidly expanded with the pandemic, as outlined below.

2.1.3 United Nations Educational, Scientific and Cultural Organization (UNESCO)

In 2019, UNESCO set out its case for better governing the relationship between AI and health in the report Steering AI and Advanced ICTs for Knowledge Societies: A Rights, Openness, Access, and Multi-stakeholder Perspective (UNESCO 2019a). The document included good practice examples of how AI applications have been adopted in healthcare in Africa. There were references to AI applications being used in diagnostics, to chatbots increasing access and information dissemination, and to services allowing people to locate healthcare facilities or actors (UNESCO 2019a). In addition, there was a call for private sector actors to “Create AI technologies to solve issues related to health, agriculture, finance, transportation, etc.” (UNESCO 2019a, 162 emphasis added). However, health played only a marginal role in the report.

The main justifications given for the need for greater regulation of AI were the growth of mass surveillance, profiling, and the misuse of data by private and public actors (health data being one aspect of this). The focus on the dangers of state surveillance and the lack of trust between individuals and private and public actors in UNESCO’s (2019a) report relates to a previous ‘crisis’: that related both to WikiLeaks uploading a vast number of private communications in 2012 and to Edward Snowden’s revelations that the National Security Agency (among others) was collecting metadata on ordinary citizens (Bauman et al. 2014).

In February 2019, UNESCO’s advisory body, the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), also published its Preliminary Study on the Ethics of Artificial Intelligence (UNESCO 2019b). Health played a more prominent role in the UNESCO (2019b) report than in the other document (UNESCO 2019a). In the section titled “AI, life sciences and health”, UNESCO (2019b, 11) set out the main ethical challenges for the use of AI technologies in healthcare, discussing their advantages and disadvantages. These included the use of robotics for surgery and caring roles, their cost, and issues of transparency and autonomy (ibid, 11). Following this, the report discussed the use of medical websites to self-diagnose and how this can undermine the doctor–patient relationship and medical authority, as well as self-medication, chatbots, social robots, and the ethical issues around human enhancement using AI (ibid, 11–12).

The focus was on specific applications of AI in health and the ethical challenges arising around them. These were not used as justifications for UNESCO’s governance role in AI. Yet the study began by detailing the need for regulation due to the ability of “multinational tech companies” to collect metadata from “billions of people”, combined with these actors’ increasing use of AI and computing power, meaning that, “via their products, AI is rapidly gaining influence in people’s daily lives and in professional fields like healthcare, education, scientific research, communications, transportation, security, and art” (UNESCO 2019b, 3). The issue of State surveillance was not included in this ‘crisis’ narrative. Instead, the justification for UNESCO’s governance role was framed as a response to States demanding regulation given the “profound social implications” resulting from the increasing power of multinational tech companies, a ‘crisis’ of power (UNESCO 2019b, 3).

When UNESCO’s (2019b) report was submitted by the Director-General of UNESCO to the Executive Board as the Preliminary Study on the Technical and Legal Aspects Relating to the Desirability of a Standard-setting Instrument on the Ethics of Artificial Intelligence (UNESCO 2019c), little had changed about how AI and health were discussed. However, the narrative of the ‘crisis’ of power related to the multinational tech companies was watered down; instead, algorithmic bias was invoked, though not discussed as a ‘crisis’.

In March 2020, WHO declared the COVID-19 outbreak a pandemic. It was within this context that UNESCO published its First Draft of the Recommendation on the Ethics of Artificial Intelligence in September 2020 (UNESCO 2020) (hereafter the 2020 Recommendations). Health was designated as a high-risk domain for the application of AI (alongside law enforcement, security, education, and recruitment) and was covered in much greater detail than in the previous documents. Indeed, one of the ten policy areas in the 2020 Recommendations was titled Health and Social Wellbeing (UNESCO 2020). The first paragraph of this policy area began by stating:

Member States should endeavour to employ effective AI systems for improving human health and protecting the right to life, while building and maintaining international solidarity to tackle global health risks and uncertainties, and ensure that their deployment of AI systems in health care be consistent with international law and international human rights law, standards and principles. Member States should ensure that actors involved in health care AI systems take into consideration the importance of a patient’s relationships with their family and with health care staff (UNESCO 2020, 20 emphasis added).

UNESCO’s (2020, 2) introduction of the use of AI systems to “tackle global health risks and uncertainties” reflects the growing acknowledgment of the COVID-19 pandemic and the role AI applications were playing in the ‘crisis’. In the 2020 Recommendations (UNESCO 2020), we see health playing a much broader role in justifying the need for the governance of AI more generally than in UNESCO (2019a, b). For example, within this policy area there is a strong emphasis on mental health, the engagement of children and youth in shaping the future landscape of AI, the dangers of the cultural and societal impacts of AI systems, behavior and habit modification, the need to counter hate speech and disinformation online, and the need to ensure the media have the space and resources to report on AI systems, as well as to use AI systems themselves (UNESCO 2020).

On 24 November 2021, the Recommendation on the Ethics of Artificial Intelligence was adopted by UNESCO’s General Conference (UNESCO 2021). Little had changed from the first draft. However, there was the addition of a reference to “the mitigation of disease outbreaks”, a further nod to the COVID-19 ‘crisis’, in the opening paragraph of the Health and Social Wellbeing policy area, now the eleventh policy area rather than one of ten (UNESCO 2021, 23 emphasis added). While most of the themes in the Health and Social Wellbeing policy area remained the same between 2020 and 2021, there were some changes. Tackling hate speech, disinformation, and the role of the media was removed from Health and Social Wellbeing and included instead in a newly created policy area on Communication and Information.

For UNESCO, healthcare has served as an example of AI’s potential to improve human well-being in line with its own governance remit. However, as with OHCHR, data-intensive technologies were for a long time problematised: various data and privacy violation scandals had created a crisis narrative in which AI was a threat. Yet UNESCO has also held a place for digital automation within healthcare. That earlier crisis narrative has shifted and, in the context of the crisis around the pandemic, UNESCO has adopted a governance position in which, as with OHCHR, its role is to help facilitate the growth and implementation of AI for ‘good’ along the lines of automated healthcare.

3 Discussion

The analysis has not directly engaged with defining artificial intelligence as a technology but, rather, has treated it as an empty signifier used to refer to a broad and changing configuration of social practices and phenomena. As documents produced by key institutions of global governance, the texts analyzed provide a means to trace the changing discourse among global governance actors. Rather than being only reactive to technological developments, the regulatory discourse is seen as part of a broader political discourse altering the conditions of possibility and legitimacy for artificial intelligence. In that understanding, the analysis identified two main findings: first, the role of ‘crisis’ moments in necessitating a governance response to AI; and, second, how ‘crisis’ alters the framing of problems and the articulation of the legitimate governance response.

First, with regard to the use of ‘crisis’, WHO used the COVID-19 ‘crisis’ throughout its 2021 guidance to justify its role in the governance of AI across a huge range and breadth of applications to health and beyond (WHO 2021). Before 2021, WHO’s policy on the governance of AI in health did not draw on ‘crisis’ narratives. However, COVID-19, a crisis stemming from health, provided an opportunity to utilize the ‘crisis’ to seek to strengthen WHO’s organizational competencies and to create the conditions within which to propose its solutions. This is in line with the role of ‘crisis’ as proposed by Radaelli and Schmidt (2004), Hall and Taylor (1996) and March and Olsen (2006). WHO also embraced a range of other ‘crises’ which reinforce this position, ‘crises’ that can also be found in the UNESCO and OHCHR documents.

For UNESCO and OHCHR, on the other hand, the regulation of AI in health played only a minimal role in their positions on their emerging governance roles. This varied slightly, however, with OHCHR referring to health more frequently than UNESCO in relation to state surveillance and the threat posed by multinational tech companies. The utilization of the COVID-19 ‘crisis’ saw health become more prominent in the UNESCO and OHCHR documents produced during the ‘crisis’. For UNESCO, we can also see the expansion, and then narrowing, of the use of health as a justification for its governance role between 2020 and 2021 (UNESCO 2020, 2021).

Second, the relationship between AI and the ‘crisis’ narratives shifted during the COVID-19 ‘crisis’. Previously, WHO saw its role as managing the opportunities and harms caused by the use of AI applications in health. AI was framed as largely positive for health, if governed appropriately. OHCHR and UNESCO also highlighted this tension throughout their documents, initially in relation to the threat of surveillance by the state and multinational tech companies.

However, this shifted for all three organizations with the COVID-19 ‘crisis’. While AI applications were still noted as being a threat, this was in relation to other, earlier ‘crises’. In relation to the COVID-19 ‘crisis’, AI was framed as a remedy. The justification for the three organizations’ governance role shifted to one whereby they need to facilitate the use of AI applications. While all three organizations have maintained a critical position on aspects of AI, in the context of digital automation in healthcare that has largely shifted to a positive narrative in which their governance role is as a facilitator. The analysis does not question the potential of such technologies or claim to evaluate their real-world application. Rather, within the terms of this analysis, the findings trace only a discursive shift in the relationship these three core institutions of global governance have with artificial intelligence and the role that narratives around digital automation in healthcare have played, due in particular to the COVID-19 crisis. That there has been more talk of ‘healthcare’ during the pandemic does not come as a surprise but, nevertheless, the texts analyzed are governance reports with longer-term visions that exceed the immediate pandemic. The substantive issues raising concern around artificial intelligence have not altered and, given the acceleration of technological development, could be said to justify even greater concern. Yet, by shifting to a narrative around digital automation in healthcare as a positive case for artificial intelligence, the regulatory discourse is turning away from questions of unequal power relations in the technology’s development toward supporting the current economic model behind it.

4 Conclusion

The analysis of the shifting discourse on AI within OHCHR, UNESCO, and WHO illustrates a broader phenomenon in which the institutions of global governance are finding their role with respect to the rapidly expanding use of artificial intelligence within the politically sensitive domain of human healthcare. Crisis narratives have played a fundamental part in justifying their emerging role. Healthcare and its digital automation have served as important devices by which to shift the crisis narrative from, for OHCHR and UNESCO, concern over the surveillance power of AI to, rather, the fear that AI’s benefits might not be realized if those agencies do not engage further. This health narrative has been the strongest for WHO which, having initially been slow to take a position on AI, has in the context of COVID-19 become an enthusiastic promoter of the technology as a solution to the pandemic crisis.

This shift reflects a wider discourse within the corporate world, with technology firms using the pandemic as a ‘crisis’ narrative to justify a rapid acceleration of AI’s role in healthcare (e.g., Lawry 2022). The article does not question the utility of such technologies in helping various aspects of healthcare but, rather, argues for the need to trace the narratives being used to justify what is a fundamental shift not only in the technology used by health carers but also, given the ownership of that technology and its acquisition of highly sensitive data, in the relative roles of state and market actors. Important to that shift is a move away from privacy protections toward the need for increased data sharing. As the analysis shows, that change has been rapid and needs to be observed historically, as done here, so that we may hold those narratives to account.