1 Introduction

The rise of artificial intelligence (AI) technology and its transformative impact across a wide range of issues pose new challenges to policymakers and other stakeholders around the globe. Whether one looks at the near, medium, or long term, myriad legal and ethical challenges, and even existential risks, arise that societies need to address. These risks are exacerbated by the lack of effective global governance mechanisms that could provide, at a minimum, guardrails steering AI in beneficial directions [1, 2].

Taken together, the unprecedented advances in AI development and deployment over the past years have led to a challenging and rapidly evolving research agenda that touches upon various aspects of the digital ecosystem. Researchers examine questions of ethics, regulation, and governance, with a strong normative push to ensure that the potential malign consequences of AI are controlled and its benefits fairly distributed [3,4,5,6,7]. Important conceptual and technical work sheds light on foundational issues such as algorithmic transparency, explainability, and safety, as well as on related aspects such as data governance and privacy [8,9,10,11]. Similarly, the ethical aspects and their implementation in code, as well as organizational governance, have attracted many valuable contributions [8, 12,13,14,15].

However, as the AI ethics research community develops frameworks and technical governance models to ensure that AI is designed and employed ethically, it must not lose sight of the global dimension of AI policy [Footnote 1] [17, 18]. Values-based approaches, ethics-by-design, and other principled suggestions must also be translated into a functional system of rules, binding agreements, and international governance mechanisms that go beyond voluntary self-commitments or hollow AI strategies.

Part of this work will take place at the national or even sub-national level. To a large extent, however, AI policy will be shaped internationally. Cutting-edge AI research is already a global enterprise dominated by large transnational technology firms. Moreover, the cross-border nature of the digital ecosystem renders purely national regulatory regimes inefficient and costly. Hence, large parts of the discussion around ethical AI and AI governance, like this article, focus on the international level.

Furthermore, AI development does not happen in a vacuum. As AI gains salience on the political agenda, geopolitical and strategic considerations increasingly shape how governments around the world position themselves towards potential regulatory or legislative measures [19, 20]. Global governance scholarship [Footnote 2] provides a useful lens to understand and explain these developments.

Therefore, this article sets out to describe and evaluate the rapidly evolving global AI governance landscape. To do so, it organizes actors and initiatives in a two-by-two matrix, distinguishing between the nature of the driving actor(s) and whether initiatives take place within the existing architecture or instead create novel instruments. As the overview and subsequent analysis show, multiple initiatives compete for influence in a fragmented landscape. Many of these are state-led, but international organizations have demonstrated a surprising level of agency in addressing AI policy. And even though AI is a novel technology going beyond the scope of established regulatory or legal governance mechanisms, there is a tendency to address these new challenges within existing frameworks. The final section discusses the findings and argues that we are beginning to see signs of consolidation of a nascent AI regime that is polycentric and fragmented but gravitates around the OECD, which holds considerable epistemic authority and norm-setting power.

2 Literature review

The rich and fast-growing scholarship on AI ethics often touches upon political questions and issues of international relations [12]. Nevertheless, literature specifically addressing global AI governance or international AI policy remains scarce. This article aims to contribute to recent work at the important intersection of AI ethics and governance [17, 21, 22].

In outlining a research agenda for AI governance, Dafoe (2018) laments the neglect of political science in understanding its problems. He emphasizes the discipline’s role in shaping AI politics and devising visions for ideal governance [23]. This notion is echoed by Parson et al. [24], who focus on the social processes by which AI technologies are developed and applied.

Most closely related to the present article is an instructive survey by Butcher and Beridze, in which the authors map current AI governance activities. They cover many notable examples across the private sector, the public sector, research and multi-stakeholder organizations, and the UN [25]. AI development is fast-paced, however, and so is the political environment in which AI governance is shaped. Hence, the present article’s updated and expanded mapping makes a vital contribution to the literature.

Other recent work has covered the topic from various angles, such as the role of international standards [1], national AI strategies [26], or ethics guidelines [21, 27]. Valuable contributions have also been made in theorizing the governance design of international agreements [16, 28].

The latter contributions have shifted focus away from observing existing governance architectures to analyzing and theorizing how they ought to be designed. This strand of research is more prescriptive and feeds directly into important policymaking considerations. One of the earlier contributions in this space investigates the roles and competencies of different institutions in managing or regulating AI, even proposing a regulatory regime for AI [29]. However, this and similar work focus on the national rather than the international level. Researchers have also proposed specific initiatives for the US to foster international cooperation on AI [30, 31] or even to set up new international bodies [32]. Looking explicitly at global governance institutions, scholars have recently sketched out a leadership role for the G20 in defining global public policy on AI [33,34,35].

These treatises of ideal AI governance are certainly important. Yet a significant gap remains in describing and theorizing the current state of AI governance. Researchers need to study the legal and governance responses to AI, “both within traditional legal and regulatory settings, and in new institutional mechanisms and settings” [24]. A deeper understanding of the political dynamics at the global level will help answer questions of how to move from the current state to the ideal.

In light of this, the contribution of this article is three-fold: (1) the two-by-two matrix provides a useful analytical tool for organizing and evaluating the current governance landscape, serving as a reference point. (2) The empirical work on important governance actors and initiatives, together with the analysis of their trajectories and connections, helps answer important questions about the dynamic evolution of AI governance. (3) The discussion of these findings brings to light important features and patterns of the nascent AI regime. All this, ultimately, allows us to extrapolate and anticipate future directions of travel and to pose questions that guide further research.

3 Mapping the current global AI governance landscape

The current global AI governance landscape displays a multitude of governance initiatives by various actors, some dealing with the regulation of very specific AI applications and others with more general, abstract principles of AI ethics and policy. By now, many countries have brought forward their own AI strategies, often with direct reference to the international level and questions of global AI governance. While these alone are valuable objects of study, this article focuses on those actors and initiatives that are transnational or multilateral in nature, i.e., that involve stakeholders from more than two countries. At this stage, almost none of them entail binding legislation; rather, they take the form of political declarations, ethical principles, or partnerships.

There are many ways to organize these actors and initiatives: by their regional or topical scope, by the actors’ nature (e.g., governmental, business, civil society), or by the kind of instrument involved (e.g., international treaties or organizations, alliances or partnerships, political declarations). This article employs a two-by-two matrix (Table 1) that distinguishes (a) between action embedded in the existing governance architecture and action that establishes new instruments; and (b) between state-led and non-state-led initiatives [Footnote 3]. Note that the latter dimension considers the origin or agency of action, not necessarily the organizational nature through which it is ultimately carried out [Footnote 4].

Table 1: An overview of the most important multilateral governance initiatives and actors in the AI domain

This overview is by no means comprehensive. There are dozens of other actors and initiatives engaging in AI governance. However, the present sample includes the most important ones to date, disregarding others (see Sect. 3.5) for the sake of feasibility and analytical precision [Footnote 5].

The following section briefly summarizes and contextualizes the different actors and initiatives, highlighting their trajectories and connections. This exercise is mostly agnostic to the content of what these global governance initiatives and arrangements actually entail. Focusing the analysis on actors and instruments was a deliberate choice to avoid confusion between structure and content.

The fragmented landscape that emerges from this exercise is congruent with other authors’ characterizations of the nascent global AI governance architecture as an “unorganised” and “immature field” [25]. Alongside the different actors, we find epistemic communities [36] that are well-connected and often overlap. Governance actors differ in their agenda-setting and norm-setting powers [21]. The analysis also shows rapid progress and first signs of consolidation and convergence. Furthermore, the observed dynamics shed some light on the type of entities involved in the early design of AI governance—which is marked more by the utilization of existing governance instruments than by institutional innovation. In addition, it demonstrates a surprisingly high level of agency on the part of international organizations. These findings are discussed in more detail in the last section.

3.1 State-led initiatives embedded in the existing architecture

The subset of “state-led initiatives embedded in the existing architecture” includes the four cases presented below. The timeline of developments testifies to the often slow and laborious processes of international diplomacy: governments began to treat AI-related policy challenges seriously within international fora in 2016. Since then, the topic has risen in priority, as shown by its ascent from the ministerial to the leaders’ level over time.

3.1.1 G7

The G7 has been a popular forum for the leaders of some of the largest democracies to discuss AI issues. Initially, discussions were held at the level of ministerial meetings but were later elevated to the leaders’ level. The G7 ICT Ministerial meetings in Japan (2016) and Italy (2017) resulted in a statement outlining a vision of human-centric AI for innovation and economic growth. Then, in March 2018, the G7 innovation ministers agreed on a “Statement on Artificial Intelligence.” Building on this statement, the Canadian G7 presidency hosted the “G7 Multistakeholder Conference on Artificial Intelligence” in December 2018, convening over 200 AI experts from the G7 countries and beyond.

The most notable action at the leaders’ level came in June 2018, when the G7 committed to the Charlevoix Common Vision for the Future of Artificial Intelligence. It includes 12 commitments to promote human-centric AI that fosters economic growth, societal trust, equality, and inclusion.

Since then, the most notable development within the G7 framework has been the inception of the Global Partnership on Artificial Intelligence (GPAI). Because it quickly expanded beyond the G7 both in membership scope and organizational structure, it is listed amongst the “state-led initiatives creating new instruments” and will be discussed in more detail in the next section.

3.1.2 G20

Following the G7’s lead, the G20 became active on AI policy somewhat later. In June 2019, under Japanese leadership, members agreed on a ministerial statement focused on human-centered AI. Even more noteworthy, they endorsed the OECD’s set of principles on trustworthy AI (see next section). This can be seen as a major achievement for the OECD, which thereby expanded its reach to some of the major players outside its membership base (especially China and Russia).

The commitment to advance the G20 AI principles was confirmed in 2020 by its digital ministers, under the Saudi Arabian presidency. At the meeting, countries also collected examples of national strategies and policy initiatives aimed at trustworthy AI. Promisingly, even China seems to have fully subscribed to the G20 AI principles, as a speech by Xi Jinping delivered at the G20 leaders’ summit in November 2020 indicates [37].

These signs of activism at the G20 have led some observers to call for its primacy in global AI governance [33] and the establishment of a “G20 coordinating committee for the governance of artificial intelligence” [34]. However, no such progress has materialized to date.

3.1.3 CCW GGE

Targeting only a specific application of AI, namely lethal autonomous weapons systems (LAWS), the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE) has met since 2017 within the framework of the United Nations Convention on Certain Conventional Weapons (CCW). With representatives from, on average, 90 states [38], the GGE can be considered the broadest international forum for talks on issues directly related to AI applications. The meetings usually take place once a year in a one-week session and have brought the most relevant states to the table.

A first breakthrough was achieved in 2018, when members identified ten guiding principles relating to important aspects such as human responsibility and international humanitarian law. Yet discussions have proven slow and difficult due to the lack of consensus on agenda items. A coalition of civil society organizations and countries has called for a legally binding instrument banning LAWS. However, given the continued resistance of major players such as the US, France, and Russia, a breakthrough towards an effective governance framework seems unlikely at this stage [39]. Moreover, the unique characteristics of AI-enabled weapon systems have led some observers to wonder whether traditional approaches to arms control and disarmament are suitable at all [19].

3.1.4 Council of Europe

The Council of Europe (CoE) has made a notable foray into AI governance. In September 2019, the CoE’s executive body—consisting of the member states’ foreign affairs ministers—established the Ad Hoc Committee on AI (CAHAI). It was tasked with “examining, through broad multistakeholder consultations, the feasibility and potential elements of a legal framework for the development, design, and application of AI” [40].

In addition to bringing together member and observer states’ views as well as input from civil society, academia, and the private sector, the CAHAI is also cooperating closely with other international institutions, such as UNESCO, the OECD and the European Commission.

CAHAI released a comprehensive collection of government contributions in its interim report [41]—which, curiously, was funded by Japan, itself not a member of the CoE (but an observer since 1996). In the report, the organization reaffirms its “clear role to address the issue of the development and uses of AI” and proposes working towards a horizontal legal instrument whose principles could serve as a basis for more specialized texts. Whether such an instrument will eventually materialize remains to be seen—the roadmap foresees a final report and a decision by member states by the end of 2021.

In any case, the CAHAI has already developed into an important forum for developing knowledge and stimulating exchange between almost 50 states. Since its membership includes a diverse set of actors with at times opposing interests regarding AI development (Russia, for example, is also a member of the CoE), agreement on binding outcomes seems unlikely—though such an agreement would no doubt be a substantial step towards global AI governance.

3.2 State-led initiatives creating new instruments

The subset of “state-led initiatives creating new instruments” is substantially smaller than the previous one. It consists of only two initiatives, both of which are predominantly driven by developed democracies.

3.2.1 GPAI

The first is the Global Partnership on Artificial Intelligence (GPAI), as mentioned above. The GPAI was originally introduced in 2017 by Canada and France under a different name (International Panel on Artificial Intelligence). The initial response was timid, and the proposal long faced strong reluctance from the Trump administration over concerns that moves towards any sort of regulation might hamper innovation in AI. Finally, in May 2020, the US changed course, now considering the GPAI a useful tool for restricting China’s influence on the emerging global AI governance system [42].

GPAI was officially launched in June 2020 with a total of 15 founding members [Footnote 6]. By December 2020, Brazil, the Netherlands, Poland, and Spain had joined, underscoring the partnership’s appeal and potential for expansion. The GPAI’s stated aim—grounded in human rights, inclusion, diversity, innovation, and economic growth—is to guide the responsible development and use of AI. By bringing together experts from industry, government, civil society, and academia, it hopes to facilitate international collaboration and act as a global reference point for specific AI issues [43]. Importantly, it also adheres to the OECD’s Principles on AI, another sign of the OECD’s successful role as a global norm-setter.

The GPAI’s evolution is an interesting case: conceived within the existing architecture, it was then launched as a separate initiative with a unique membership base going beyond the G7. It thus demonstrates the characteristics of a new, standalone instrument, while ultimately ending up hosted by one of the existing international organizations (the OECD).

The GPAI is arguably the most advanced global AI governance instrument to date, with a permanent secretariat and a relatively broad membership base. While it is so far missing major players such as China and Russia, the incoming Biden administration could make the US more inclined to find consensus and thus broaden the alliance further. In any case, the OECD’s role as host of the GPAI will be useful in avoiding policy incoherence and fragmentation, given that the OECD is also closely aligned with the G20 (see above).

3.2.2 AI Partnership for Defense

Another recent state-led initiative is the US-driven AI Partnership for Defense. Six NATO members as well as other US allies such as Israel, Japan, and Sweden followed an invitation by the Pentagon’s Joint Artificial Intelligence Center to a virtual conference in September 2020. Discussions ranged from policy issues such as ethical principles to military use cases of AI and scope for technical cooperation [44]. It remains to be seen how this loose group will continue under a Biden administration, but in any case, the strong response from partners signals growing interest in cooperation on security and defense matters related to AI. This is also backed by statements from high-ranking NATO officials, who have supported increased transatlantic cooperation on this matter in the past [45].

3.3 Non-state-led initiatives embedded in the existing architecture

The global governance literature brought to international relations scholarship a stronger focus on non-state actors, ranging from international organizations and businesses to civil society actors and non-governmental organizations [46, 47]. In this vein, this mapping of the emergent global AI governance landscape would not be complete without looking at non-state-led initiatives.

This section covers several international organizations, namely the UN, the EU, and the OECD. All three have actively sought leadership roles in the previously unoccupied space of AI governance. The section also includes international standard-setting organizations, which are competing over the formulation of a variety of standards—from narrowly technical to more general and political—in the AI domain.

3.3.1 UN

The most obvious candidate to consider when describing any global governance system is the United Nations (UN). Secretary-General António Guterres has emphasized the impact of emerging technologies, including AI. In 2018, he established the High-Level Panel on Digital Cooperation, a multi-year, multi-stakeholder, global effort to address a range of issues related to the Internet, artificial intelligence, and other digital technologies. Its results were presented as a “Roadmap for Digital Cooperation” in June 2020 and included a recommendation on global cooperation for AI that is “trustworthy, human-rights based, safe and sustainable and promotes peace” [48]. In the roadmap, Guterres states his intention to establish a multi-stakeholder advisory body on global AI cooperation, comprising member states, relevant UN entities, interested companies, academic institutions, and civil society groups. The body should “serve as a diverse forum to share and promote best practices, as well as exchange views on artificial intelligence standardization and compliance efforts.” In addition, he committed to appointing an Envoy on Technology by 2021. It remains to be seen whether these intentions will be followed up on in 2021, and whether their outcomes will match expectations.

Other parts of the UN system are also becoming engaged in AI governance: already in 2015, the UN Interregional Crime and Justice Research Institute (UNICRI) launched a programme on AI and robotics. That same year, AI governance was discussed for the first time during the 70th UN General Assembly [25]. The UN Institute for Disarmament Research (UNIDIR) supports the work of the GGE on LAWS. UNIDIR and the UN University Centre for Policy Research (UNU-CPR) have also set up research projects to explore AI-related policy challenges.

More and more UN agencies are looking at AI, both as a disruptive technology for their respective policy domains and as a tool to achieve the Sustainable Development Goals (SDGs) [49]. For instance, since 2017, the International Telecommunication Union (ITU) has co-organized the annual AI for Good Global Summit. Then there is UN Global Pulse, the Secretary-General’s initiative on AI for humanitarian aid and development, which also looks at AI governance. Among other activities, it convenes an Expert Group on Governance of Data and AI, bringing together international leaders from the public and private sectors, civil society, and the legal community [50]. All these efforts go hand in hand with the wider AI for Social Good (AI4SG) movement, which aims at establishing interdisciplinary partnerships centered on AI applications towards the SDGs [51].

These various efforts establish the UN as a global convening platform for stakeholders interested in exploring how AI can contribute to achieving the SDGs and solving global problems. This gives the UN considerable epistemic authority, though this is somewhat undermined by the multitude of initiatives and workstreams, which creates inconsistency and complexity. It also allows the UN to support consensus-building between states to promote common goals, thus making AI governance more effective [25].

3.3.2 European Commission

The European Commission took action on AI policy even before most EU member states did. With the release of its AI strategy in April 2018, it established the High-Level Expert Group on Artificial Intelligence, whose 52 members have been at the forefront of the global debate around AI regulation and governance. The Commission furthermore charted new ground with the release of its widely noted “Ethics Guidelines for Trustworthy AI” in April 2019.

Since then, the Commission has published a White Paper on AI (February 2020) as a preparatory step for a forthcoming legislative proposal on AI. During a months-long open consultation process, Brussels probed reactions to its proposed approaches for regulating and governing the technology’s development and application. The gathered feedback is to be transformed into a legislative proposal expected in the first half of 2021, setting the EU on track to become the first major jurisdiction worldwide with a binding legal framework for AI.

The Commission’s overall goal is to chart a so-called “European third way” for AI development, which policymakers frame as “human-centric,” “ethical,” and “trustworthy.” If (or rather, when) translated into hard law, this will undoubtedly have repercussions well beyond the EU’s direct jurisdiction, as the global reach of the EU’s privacy regulation, the GDPR, has demonstrated [Footnote 7].

In addition to these strategic, regulatory and legislative approaches, the European Commission acts in close coordination with its member states, mainly through the Coordinated Plan on AI. It also liaises with the wider AI community and especially with industry, bringing together over 4000 representatives via its European AI Alliance—a multi-stakeholder forum launched in June 2018. On the global level, it is engaging with most other actors listed in this overview. Moreover, it is a founding member of GPAI, underscoring its active role in the international arena.

3.3.3 OECD

Another well-known international organization that has sought ownership of AI-related governance issues is the Organisation for Economic Co-operation and Development (OECD). Back in 2016, the OECD’s Committee on Digital Economy Policy began discussing the need for AI principles and established an expert group in May 2018. The resulting “OECD Principles on AI” were adopted in May 2019 as the “first set of intergovernmental policy guidelines on AI” and included commitments to trustworthy, human-centered AI [54]. Beyond the OECD members, Argentina, Brazil, Costa Rica, Malta, Peru, Romania, and Ukraine have also signed up to the AI principles, signaling broad international appeal.

In addition to this work, the OECD has built up a considerable public-access knowledge base, the OECD.AI Policy Observatory, launched in February 2020 to help policymakers implement the AI principles and further inform the global discourse on AI governance. The OECD Network of Experts on AI (ONE AI), a multi-disciplinary and multi-stakeholder group, was also set up to provide AI-specific policy advice and foster international cooperation.

These efforts paid off: when France and Canada used their G7 presidencies to launch the GPAI together with 13 other founding members, as discussed in the previous section, they decided to host its secretariat at the OECD. This hybrid structure has the potential to foster synergies between the OECD-led work on global AI policy and the GPAI’s more technical discourse [43]. Furthermore, as mentioned above, the OECD’s principles were endorsed by the G20, which includes China and Russia, giving them an even broader international reach. The principles also serve as the basis for the work of the GPAI, thus anchoring the alliance firmly within the OECD’s sphere of influence—both organizationally (hosting of the secretariat) and normatively.

Granted, the OECD does not have any regulatory or legislative power, including on AI policy. In any case, binding international treaties that regulate the development and use of AI horizontally seem far-fetched at this point. What remains is soft power—the ability to influence global AI governance through epistemic authority, convening power, and norm- and agenda-setting. In this realm, the OECD has demonstrated considerable strength.

3.3.4 Standards organizations

The subset of “non-state-led initiatives embedded in the existing architecture” also includes international standard-setting organizations, whose membership base is usually dominated by industry and business associations [Footnote 8]. In the following, the article describes the ongoing work on AI standards at the two leading international standards bodies. Their work tends to be rather technical. However, standards—and especially international standards—undoubtedly affect the development and roll-out of AI technology and, by extension, the corresponding regulatory and governance domains [Footnote 9]. Furthermore, the epistemic authority of these standard-setting bodies informs and influences policymaking by other actors, both directly and indirectly.

The International Organization for Standardization (ISO), together with the International Electrotechnical Commission (IEC), has had a dedicated AI sub-committee, ISO/IEC JTC 1/SC 42, since 2017. While most of its work is dedicated to technical aspects, it explicitly frames its work on AI standardization as a new “holistic ecosystem” approach that also considers ethical and societal concerns [55]. Its members actively engage with other relevant global stakeholders such as the OECD, the European Commission, and the Partnership on AI [55].

Furthermore, important efforts have been under way at the IEEE Standards Association since at least 2016. Its “IEEE Global Initiative on Ethics of A/IS” aims, inter alia, at “global consensus building to inspire the Ethically Aligned Design of autonomous and intelligent technologies” [56]. Several relevant work strands and publications have sprung from this initiative [Footnote 10]. Grouped under the IEEE P7000 standards family, 14 AI-related standards are currently being developed in Working Groups.

Lastly, the IEEE initiative also contributed to the establishment of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS) in July 2018. Gathering more than 70 organizations, it is designed as a global forum for exchange and collaboration on the ethical development and use of AI-related standards.

As these organizations continue to develop standards, they will undoubtedly shape the development and use of AI. With their institutional capacity to achieve expert consensus and promulgate standards internationally, which then become enforced either de facto or de jure, they exercise certain norm-setting powers. Their role as a global reference point and their intensive exchanges with other global governance actors also give them considerable epistemic authority as well as convening and agenda-setting power.

3.4 Non-state-led initiatives creating new instruments

3.4.1 PAI

Unlike its state-led near-namesake GPAI, the Partnership on AI (PAI) was born out of an alliance of non-state actors, namely the big American technology companies at the forefront of AI development. It was established in late 2016 by a group of AI researchers representing Apple, Amazon, DeepMind and Google, Facebook, IBM, and Microsoft. One year later, this business-centered setup was expanded to include six not-for-profit board members, turning the PAI into a multi-stakeholder organization that today convenes more than 50 member organizations.

PAI actively supports research on many pressing issues related to AI ethics and governance. Besides acting as a convener and knowledge incubator, it also facilitates educational projects as well as practical tools such as the recently launched AI Incident Database (AIID). Since November 2020, the AIID has documented failures of AI systems around the world. The idea of this publicly available repository is to disseminate knowledge and improve the safety of AI systems deployed in the real world. The AIID is inspired by incident databases in the aviation and computer security industries. The usefulness of such databases is undisputed: they allow developers to learn from their peers’ mistakes and open up research avenues for external observers, who can thus gauge the AI world for both episodic and systemic risks.

The AIID is still in its infancy, and it is too early to tell whether the wider AI community will accept it as a tool of reference. However, the PAI’s early-mover advantage and broad membership base enable it to establish itself as a cornerstone of the emerging AI governance landscape. Since the AIID is developed as an open-source project collectively governed by the PAI, it is reminiscent of the origins of Internet governance [57]. Whether the AI community will continue on this collaborative path remains to be seen; at the same time, there are signs that as the AI industry matures, it increasingly moves to proprietary models and favors commercial over common interests [58, 59].

3.5 Other actors: NGOs, research institutes, and global movements

Besides the above-mentioned actors and initiatives, there are dozens—if not hundreds—of others that also affect global AI governance in one way or another. These include non-governmental organizations, research institutes, public-sector entities (e.g., cities and regional governments), and global movements (e.g., the Campaign to Stop Killer Robots). Taken together, they provide a considerable epistemic resource, and their engagement in agenda-setting should not be underestimated. Their regular interactions with the actors discussed in previous sections can indirectly influence outcomes at the global level.

Nevertheless, their individual impact is comparatively low and hence outside the scope of this study. The causes may be that their approach to AI governance is too specific (i.e., focusing on only one aspect or sector of AI) or too tangential (i.e., addressing the wider digital ecosystem and mentioning AI only in passing), or simply that they lack the political clout to make their voices heard. This last point speaks especially to the important debate about inclusivity and participation in AI governance [60, 61].

4 Discussion

The preceding overview of the global AI governance landscape allows for several relevant observations, which are discussed in turn below.

First, there is a clear tendency, among both state and non-state actors, to accommodate governance initiatives within the existing architecture. There are several potential explanations. States and other global governance actors might be wary of foundational innovation and of starting from scratch, preferring instead to build on existing, proven governance arrangements. Alternatively, more attempts to create new instruments may have been made but proved less fruitful and therefore do not feature in this overview. In any case, the case of the GPAI suggests a gravitational pull towards established governance mechanisms.

Second, there is a fairly equitable distribution of labor between national governments (state-led) and international organizations (non-state-led). The community of international organizations moved early to occupy an open policy space, thus carving out considerable competence vis-à-vis its member states. These, in turn, offloaded some of the AI policy work to international organizations (the CoE, and the OECD via the GPAI). This suggests that states accept international organizations as useful fora for international cooperation and for steering AI development in globally beneficial directions. However, global coordination in this realm has so far not touched upon legally binding treaties. It may well be that governments decided to transfer some authority to IOs only as long as these deal with rather abstract principles or soft governance, but would withdraw or stall as soon as work proceeds towards more regulatory, hard governance. Whether the CoE produces any meaningful conclusions by the end of the year may be a good indication of the potential for such binding international rules.

Third, international standards organizations play a role in the development of AI governance, as is the case for most emerging technologies. More worrying is the shift towards geopolitics: in recent years, the development of international AI standards has received increasing attention from key governments such as China, the EU, and the US. Their renewed interest and subsequent strategic engagement risk contention and the encroachment of geopolitical considerations into domains that ought to be technical [62, 63]. This may not only affect the quality of standards but also obstruct debates around AI ethics. Since standards cannot be completely detached from the policy world, scholars of global AI governance need a sound understanding of the proceedings in the international standard-setting arena. Future research should explore the interactions and means by which governments aim to steer the development of standards to further their own perceived interests.

Lastly, sub-state actors from the public sector are practically absent from the discussions around global AI governance. This stands in stark contrast to other policy domains, such as global climate change governance, where city networks play an important role. It is also somewhat surprising, given that cities are focal points of AI rollout and several cities have taken notable actions with regard to AI policy. To date, however, these actions remain isolated and do not engage at the supranational or global level.

In light of the fuzzy nature of AI, it is hardly surprising that the current landscape is somewhat fragmented. Promising moves towards some degree of centralization and coordination are found in the prominent role of the OECD. With its epistemic authority and its norm- and agenda-setting power, it has managed to act as a reference point for the G7 and G20. Through its close collaboration with other multilateral actors such as the European Commission, the UN, and the CoE, and by using the GPAI as a dedicated tool for advancing global AI governance, it may continue to play a leading role.

With all this in mind, this article argues that we are witnessing the first signs of consolidation in this fragmented landscape. The nascent AI regime that emerges is polycentric and fragmented but gravitates around the OECD, which holds considerable epistemic authority and norm-setting power. It is polycentric because it features different epistemic communities and multiple centers of decision-making, each operating with some degree of autonomy. It is fragmented because there is substantial overlap in different actors’ membership and the topics addressed by these initiatives; the well-connected epistemic communities are equally overlapping. As with other polycentric governance architectures, global AI governance will likely continue to struggle with the challenge of coordination [64]. While epistemic and membership overlap may benefit consolidation or convergence, topic overlap tends to foster fragmentation and adds complexity to the regime.

This article has been mostly agnostic to the content of what these global governance initiatives and arrangements actually entail. Focusing the analysis on structure, actors, and instruments was a deliberate choice to avoid confusion between structure and content. Nevertheless, a quick look at the main developments suggests convergence on a certain set of AI values and principles, as put forward by the European Commission and the OECD, centered on trustworthy, human-centric AI.

Such terms are of course abstract and somewhat vague, leaving room for interpretation. The interpretation, contextualization, and operationalization of AI values will without doubt face major contestation by different actors. While China is side-lined from most of the above initiatives, its role in AI governance cannot be overstated. The government has signaled willingness to engage in global governance as a responsible actor and, specifically on AI ethics, has taken some steps towards conciliation. Yet it will want to interpret AI ethics in accordance with its own cultural context and promote these views globally. Hence, how China engages with the GPAI and other governance initiatives (and vice versa) will be an interesting space to watch and leaves ample room for future research.

5 Conclusion

This article outlined the current state of play in global AI governance by describing the most important multilateral initiatives. It thus contributes to the growing body of literature aimed at understanding and engaging with the rapidly evolving global AI governance architecture. It organized individual actors and initiatives in a two-by-two matrix, distinguishing between the nature of the driving actor(s) and whether or not their actions take place within the existing governance architecture. On this basis, it provided an overview of key actors and initiatives, highlighting their trajectories and connections. Lastly, it argued that we are witnessing the first signs of consolidation in this fragmented landscape. The nascent AI regime that emerges is polycentric and fragmented but gravitates around the OECD, which holds considerable epistemic authority and norm-setting power.

The analysis traced interlinkages and sequential developments that shed additional light on the evolving nature of this dynamic field. It also brought to light valuable insights into the emergent governance regime: most interactions are accommodated within the existing governance architecture, such as the UN system, established IOs, and international standard-setting bodies. International organizations such as the EU and the OECD have demonstrated remarkable agency in shaping global AI governance.

Whether these observations parallel developments in other global governance architectures (e.g., climate, nuclear safety, or the Internet) would be an interesting avenue for future research. Complementary future analyses could also look in more detail at the strategies and actions of nation-states and at how these engage at the global level in shaping AI governance.

Building on such descriptive empirical work, further research on global AI governance could engage more thoroughly with analytical and theoretical questions. We might ask, for instance, how this nascent global AI governance system fits into the wider global governance architecture (see [65]). Or, in the absence of a singular central authority in the global AI governance system, what polycentricity theory [66, 67] can tell us about the way in which actors mutually adjust and order their relationships with one another.