1 Introduction

Artificial intelligence has been identified as a key suite of technologies for many countries worldwide, in large part motivated by its general-purpose nature (Brynjolfsson and McAfee 2014). AI technologies, and machine learning techniques in particular, are being fruitfully applied to a vast range of domains, including language translation, scientific research, education, logistics, transport and many others. It is clear that AI will affect economies, societies and cultures profoundly at a national, international and global level. This has resulted in increasing attention being paid both to AI ethics (questions about how we should develop and deploy AI systems, given their potential impact on wellbeing and other deeply held values such as autonomy or dignity) and to AI governance (the more practical challenge of ensuring the ethical use of AI in society, be that through regulation, governance frameworks, or ‘softer’ approaches such as standards and ethical guidelines).Footnote 1

Cross-cultural cooperation will be essential for the success of these ethics and governance initiatives. By ‘cross-cultural cooperation’, we mean groups from different cultures and nations working together on ensuring that AI is developed, deployed, and governed in societally beneficial ways. In this paper, we focus in particular on cooperation extending across national boundaries. Examples include (but are not limited to) the following: AI researchers from different countries collaborating on projects to develop systems in safe and responsible ways; establishing networks to ensure that diverse global perspectives can feed equally into international discussions about the ethical issues raised by AI; and involving a range of global stakeholders in the development of practical principles, standards, and regulation. By encouraging cross-cultural cooperation, we do not necessarily mean that all parts of the world should be subject to the same norms, standards, and regulation relating to AI, or that international agreement is always needed. Identifying which issues will need global standards or agreements, and where more cultural variation is needed, is itself a key challenge that will require cooperation to address.

Cross-cultural cooperation is important for several reasons. First, cooperation will be essential if AI is to bring about broad benefits across societies globally, enabling advances in one part of the world to be shared with other countries, and ensuring that no part of society is neglected or disproportionately negatively impacted by AI. Second, cooperation enables researchers around the world to share expertise, resources, and best practices. This enables faster progress both on beneficial AI applications, and on managing the ethical and safety-critical issues that may arise. Third, in the absence of cooperation, there is a risk that competitive pressures between states or commercial ecosystems may lead to underinvestment in safe, ethical, and socially beneficial AI development (Askell et al. 2019; Ying 2019). Finally, international cooperation is important for more practical reasons: to ensure that applications of AI that are set to cross national and regional boundaries (such as those used in major search engines or autonomous vehicles) can interact successfully with a sufficient range of different regulatory environments and other technologies in different regions (Cihon 2019).

Drawing on the insights of a group of leading scholars from East Asia and Europe,Footnote 2 we analyse current barriers to cross-cultural cooperation on AI ethics and governance, and how they might be overcome. We focus on cooperation between Europe and North America on the one hand and East Asia on the other. These regions are currently playing an outsized role in influencing the global conversation on AI ethics and governance (Jobin et al. 2019), and much has been written recently about competition and tensions between nations in these regions in the domains of AI development and governance, especially in the case of China and the USA. However, our discussion and recommendations have implications for broader international cooperation around AI, and we hope they will spur more attention to promoting cooperation across a wider range of regions.

As AI systems become more capable and their applications more impactful and ubiquitous, the stakes will only get higher. Establishing cooperation over time may become more difficult, especially if cross-cultural misunderstandings and mistrust become entrenched in intellectual and public discussions. If this is the case, then the earlier a global culture of cooperation can be established, the better. Cultivating a shared understanding and deep cooperative relationships around guiding the impacts of AI should therefore be seen as an immediate and pressing challenge for global society.

2 The Role of North America, Europe, and East Asia in Shaping the Global AI Conversation

North America, Europe, and East Asia in particular are investing heavily in both fundamental and applied AI research and development (Benaich and Hogarth 2019; Haynes and Gbedemah 2019; Perrault et al. 2019), supported by corporate and government funding. Various analyses have framed AI development and deployment in the USA and China in particular through a competitive lens (Simonite 2017; Allen and Husain 2017; Stewart 2017), though this framing has received criticism on both normative and descriptive grounds (Cave and ÓhÉigeartaigh 2018).

Scholars and policy communities in these regions are also taking deliberate and active steps to shape the development of ethical principles and governance recommendations for AI, both at a regional and global level. This is reflected in government-linked initiatives, such as the activities of the European Union’s High-Level Expert Group on Artificial Intelligence, which has produced ethics guidelines and policy and investment recommendations as its first two publicationsFootnote 3; the UK Government’s commitments to ‘work closely with international partners to build a common understanding of how to ensure the safe, ethical and innovative deployment of Artificial Intelligence’ (May 2018); and the Chinese Government’s similar commitment to ‘actively participate in global governance of AI, strengthen the study of major international common problems such as robot alienation and safety supervision, deepen international cooperation on AI laws and regulations, international rules’ (China State Council 2017). North America, Europe, and East Asia are each also contributing disproportionately towards international AI standards work ongoing in fora such as the International Organization for Standardization (ISO),Footnote 4 the Institute of Electrical and Electronics Engineers (IEEE),Footnote 5 and the Organization for Economic Co-operation and Development (OECD).Footnote 6

The prominence of North America, Europe, and East Asia is further seen in the leadership and composition of multi-stakeholder and nongovernmental initiatives such as the Partnership on AI,Footnote 7 the Future Society,Footnote 8 the International Congress for the Governance of AI,Footnote 9 and the Global Partnership on AI (Hudson 2019).Footnote 10 A large majority of the most prominent conferences on AI ethics and governance have taken place in these regions, including the US-based Beneficial AI conference series,Footnote 11 the Beijing Academy of AI Conference series,Footnote 12 the Beijing Forum,Footnote 13 the US-based Artificial Intelligence, Ethics, and Society conferences,Footnote 14 governance and ethics workshops attached to the leading machine learning conferences, and many more.

This combination of:

  a. technological leadership in North America, Europe, and East Asia;

  b. the outsized role of these regions in shaping the global ethics and governance conversation; and

  c. the underlying tension introduced by progress being framed through a competitive lens, and a perception of disagreements on fundamental ethical and governance issues

leads us to focus on the barriers that exist to productive intellectual exchange and cooperation between these regions and cultures in particular. A full analysis of cross-cultural cooperation on AI ethics and governance, which is outside the scope of this paper, must consider the roles of all nations and cultures, given that the domain of impact of AI technologies is truly global. It will be particularly important for further work to address the inequalities in power and influence that are emerging between technology-leading nations and those to which these technologies are being exported (Lee 2017), and the responsibility of technology-leading nations to include and empower those nations in global governance and ethics conversations.

3 Barriers to Cross-cultural Cooperation on AI

Despite the emergence of several international alliances in AI ethics and governance, many barriers remain to achieving real cross-cultural cooperation on the norms, principles, and governance frameworks that should guide how AI is developed and used.

Mistrust between different regions and cultures is one of the biggest barriers to international cooperation in AI ethics and governance. At present, there is a particular environment of mistrust between scholars, technologists, and policymakers in the USA and China.Footnote 15 This culture of mistrust is underpinned by both:

  a. a history of political tensions between these two powerful regions that has increased significantly in recent years, and is currently contributing to the competitive framing of AI development as a ‘race’ between ‘Eastern’ and ‘Western’ nations;Footnote 16

and

  b. the divergent philosophical traditions upon which these regions are founded, leading to a perception of significant and irresolvable value differences between ‘Western’ and ‘Eastern’ cultures on key issues such as data privacy (Larson 2018; Horowitz et al. 2018; Houser 2018).

A range of recent technological and political developments may also be contributing to this mistrust, including concerns about the public and political influence of technology giants in the USA (Ochigame 2019); perceptions of and reactions to the Chinese social credit score system (Chorzempa et al. 2018; Song 2019); and concerns about contentious uses of AI technologies, with notably controversial examples including the use of AI in immigration control (Whittaker et al. 2018), in criminal risk assessment in the USA (Campolo et al. 2017), and in tracking communities such as the Uighur Muslim minority in China (Mozur 2019). Adversarial rhetoric from political and defence leaders in the USA also contributes to this tension. Recent examples reported in the media include stated intentions to ‘be the threat’ of AI,Footnote 17 and comments focused on the ‘otherness’ of China as an adversary,Footnote 18 amid broader concerns regarding Chinese technological progress as a threat to US global leadership (Jun 2018). A continued exacerbation of this culture of mistrust could severely undermine possibilities for global cooperation on AI development and governance.

In addition, it is unclear how far existing cross-cultural collaborations and alliances can go to shape the behaviour of actors that are as globally dominant as the USA, China, and the large multinational corporations based in these countries. Even if AI ethics frameworks can be agreed on in principle by multi-stakeholder groups, for example, it will be far from straightforward to implement them in practice to constrain the behaviour of those with disproportionate power to shape AI development and governance.

Another challenge for effective cooperation is balancing the need for global cooperation with the need for culturally and geographically sensitive differentiated approaches (Hagerty and Rubinov 2019). It is crucial that we avoid a situation where one or two nations simply try to impose their values on the rest of the world (Acharya 2019). In certain specific domains, for example where AI is being used to support the delivery of healthcare, different cultures may perceive tradeoffs very differently (Feldman et al. 1999), and it may be not just possible but necessary to implement region-specific standards and governance. AI systems will also have different impacts as they are deployed in different cultural regions, which may in turn require different governance approaches (Hagerty and Rubinov 2019).

For some aspects of AI development and governance, however, cooperation will be much more crucial. For example, some potential uses of AI technologies in military contexts, such as in automated targeting and attack, could impinge upon human rights and international humanitarian law (Asaro 2012). Another concern is that by automating aspects of information gathering, decision-making, and response in military arenas, the potential for unwanted escalation in conflict situations may increase due to events occurring and requiring responses faster than is compatible with effective human judgement and oversight (Altmann 2019). In both these cases, there may be individual military advantages to nations pursuing the technology; but in the absence of international agreements and standards, the overall effect may be destabilizing.Footnote 19 International agreement will also be of particular importance for all cases in which AI technologies are developed in one region, but used or deployed in a different region. A key challenge for cross-cultural cooperation will therefore be to identify the areas where international agreement is most important, and distinguish these from areas where it is more appropriate to respect a plurality of approaches.

There are also more practical barriers to cooperation between nations: language barriers, lack of physical proximity, and immigration restrictions put limits on the ability of different cultures and research communities to communicate and collaborate. Furthermore, despite science being a global enterprise, the language of scientific publication remains predominantly English.

4 Overcoming These Barriers to Cooperation

While cross-cultural cooperation in AI ethics and governance will be genuinely challenging, we suggest that there are steps that can be taken today to make progress, without first needing to tackle the greater problems of finding consensus between cultures on all fundamental ethical and philosophical issues, or resolving decades of political tension between nations.

4.1 Building Greater Mutual Understanding, Including Around Disagreements

Mistrust between nations is a serious concern for the future of AI ethics and governance. However, we suggest that this mistrust is at least partly fuelled by misunderstandings and misperceptions, and that a first step towards building greater cross-cultural trust must therefore be to identify and correct important misperceptions, and enhance greater mutual understanding between cultures and nations.

It would be easy to assume that the main barrier to building trust between East and West around AI is that these regions of the world have very different fundamental values, leading them to different—perhaps conflicting—views of how AI should be developed, used, and governed from an ethical perspective. While value differences between cultures certainly exist, claims about how those differences manifest often depend on unexamined concepts and entrenched assumptions, and lack empirical evidence (Whittlestone et al. 2019). The idea that ‘Eastern’ and ‘Western’ ethical traditions are fundamentally in conflict also oversimplifies the relationship between the two. There are many different philosophical traditions that might be referred to under either heading: there are, for example, many important differences between relevant philosophical perspectives across China, Japan, and Korea (Gal 2019), and the values and perspectives of ‘Western’ philosophy have changed a great deal over time (Russell 1945). More broadly, ethical and cultural values in both regions are in constant evolution, as captured by projects such as the World Values SurveyFootnote 20 and the Asian Barometer.Footnote 21

Differences in ethical and cultural traditions and norms across regions are often assumed to underpin contrasting governance approaches. For example, privacy is often seen as an issue for which significant value differences exist between East and West, leading to a perception that laxer regulation and controls exist on data privacy in China compared with the USA and Europe. However, such claims are often made in very broad terms, without substantive evidence or analysis of how significant these differences are or how they manifest in practice (Ess 2005; Lü Yao-Huai 2005). This leads to misunderstandings in both directions. First, there are significant differences between the USA and Europe on both conceptions of privacy (Szeghalmi 2015) and regulations that relate to privacy (McCallister et al. 2018). These are often missed in Chinese perceptions of Western societies, which tend to focus just on the USA.Footnote 22 Second, Western perceptions of data privacy in China may be outdated: as early as 2005, Lü Yao-Huai highlighted that the relevant literature on information ethics was much younger in China than in the USA, but was evolving quickly and in a manner significantly informed by Western scholarship (Lü Yao-Huai 2005). A range of papers and reports from Chinese scholars and policymakers have highlighted the importance of data privacy in AI ethics and governance (Beijing Academy of Artificial Intelligence 2019; Ying 2019; Zeng et al. 2018; Ding 2018b). Principles around protecting individuals’ data privacy are also beginning to be borne out in regulatory action in China: over 100 apps have been banned by the government for user data privacy infringements, with dozens more being required to make changes relating to data collection and storage.Footnote 23 This is not to suggest that there are no meaningful differences in values, norms, and regulations relating to data privacy between these countries, but that such differences have often been oversimplified and are not well understood.

Another example of differing perceptions concerns China’s social credit score (SCS) system. The SCS has been discussed with great concern in Western media, policy circles, and scholarship, and presented as an example of Orwellian social control by the Chinese government (Botsman 2017; Pence 2018), representative of a culture and government with profoundly different values to the West (Clover 2016). However, both Chinese and Western sources have argued that there are significant misunderstandings surrounding the SCS. Multiple scholars have pointed out that the SCS is not designed to be a single unified platform that rates all 1.4 billion Chinese citizens (as is often supposed), but rather a web of individual platforms with latitude for different interpretations, with social credit scores mostly given by financial institutions (as opposed to a big data-driven comprehensive rating) (Mistreanu 2019; Sithigh and Siems 2019). Song (2019) notes that many of the measures in the SCS are designed to tackle issues such as fraud and corruption in local government. Chorzempa et al. (2018) also highlight that ‘many of the key components of social credit, from blacklists to widespread surveillance… already exist in democracies like the United States.’ China’s SCS is likely to evolve significantly over time, and there are likely to be genuine reasons for concern in terms of both present and future implementation. However, a much clearer cross-cultural understanding of how the SCS works, is being used, and is impacting Chinese citizens would allow dialogue on relevant ethics and governance issues to progress more constructively.

Given the lack of shared knowledge and discourse that has existed historically between regions such as the USA, Europe, and China, it is not surprising that many misperceptions exist between them. We should therefore be wary of jumping too quickly to assume intractable and fundamental disagreements. Misunderstandings clearly exist in both directions: analyses of public opinion survey data suggest, for example, that both American and Chinese populations hold various misperceptions about the other nation’s traits and characteristics (Johnston and Shen 2015). As mentioned above, in China, the diversity of Western societies is also often oversimplified down to a single pattern of American life. At the same time, the USA and Europe have historically struggled to understand China (Chen and Hu 2019), evidenced for example by repeated failures to predict China’s liberalization (or lack thereof) or periods of economic growth (The Economist 2018; Cowen 2019; Liu 2019). Language barriers present a particular difficulty for Western nations in gleaning what is happening in China in terms of AI development, ethics, and governance (Zhang 2017; Ding 2019). As Andrew Ng points out in a 2017 interview in the Atlantic: ‘The language issue creates a kind of asymmetry: Chinese researchers usually speak English so they have the benefit of access to all the work disseminated in English. The English-speaking community, on the other hand, is much less likely to have access to work within the Chinese AI community’ (Zhang 2017). For example, Tencent released a book on AI strategy (Tencent Research Institute et al. 2017) which includes deep analysis of ethics, governance, and societal impacts, but has received relatively little English-language coverage (Ding 2018a). Even on empirical matters such as level of Chinese public investment in AI research and development, widely reported figures in the USA may be inaccurate by an order of magnitude (Acharya and Arnold 2019).

The recently published Beijing AI Principles (Beijing Academy of Artificial Intelligence 2019) and similar principles developed around the world (Cowls and Floridi 2018) in fact show substantial overlap on key challenges (Zeng et al. 2018; Jobin et al. 2019). The Beijing Principles make clear reference to the key concepts and values which have been prominent in other documents, including that AI should ‘benefit all humankind’; respect ‘human privacy, dignity, freedom, autonomy and rights’; and be ‘as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability and predictability.’ In addition, both the Beijing AI Principles, and the National Governance Principles of the New AI, China, call for openness and collaboration, with the latter encouraging ‘cooperation across disciplines, domains, regions, and borders’ (Laskai and Webster 2019). However, nations with different cultures may interpret and prioritize the same principles differently in practice (Whittlestone et al. 2019), which may be a further source of misunderstanding. We cannot simply assume, for example, that ‘Western’ cultures value privacy more highly than ‘Eastern’ ones; instead, we need a more nuanced understanding of how privacy may be prioritized differently when it comes into conflict with other important values, such as security (Capurro 2005). Similarly, although it is important to recognize that many cultures value autonomy, it is equally important to understand the different connotations and philosophical assumptions underpinning this value in different contexts (Yunping 2002).

Given the rich history of misunderstanding between nations, to build greater cross-cultural cooperation, we should start by focusing on identifying those misperceptions most relevant to AI ethics and building greater mutual understanding of where more substantive disagreements exist and are likely to impact governance approaches. In doing so, it is worth distinguishing explicitly between disagreements pertaining to ethics as opposed to governance issues, since it may sometimes be possible for groups to agree on the same governance principles despite justifying them with different ethical assumptions, as we will discuss later. It may also be helpful to distinguish between misunderstandings that pertain directly to AI (such as misperceptions of other countries’ investment in technology, or misinterpretation of data protection laws) and those that pertain to broader societal, political, or philosophical matters that are more indirectly relevant to AI, as they may require different approaches to resolve.

Acknowledging the role of misunderstandings does not, of course, imply that all matters of intercultural tension in AI ethics and governance are fundamentally based on misunderstandings. Deep and fundamental disagreements across regions will remain on a range of issues, including those relating to the relationship between the individual, society, and the state; the level and nature of integration between civil, private, and military sectors; and various specific matters of social policy. However, focusing initially on reducing misunderstandings will aid in establishing more clearly where these fundamental differences exist, while at the same time identifying contexts in which sufficient agreement exists for fruitful cooperation. Doing so is a crucial first step towards addressing the broader challenges of cross-cultural cooperation on AI ethics and governance.

4.2 Finding Ways to Cooperate Despite Disagreements

Even where important differences of view on AI ethics, governance, and broader societal issues exist, forms of agreement and cooperation can still be possible.

As mentioned earlier, a key outstanding challenge for AI ethics and governance is identifying those areas where cross-cultural agreement on norms, standards, or regulation is crucial, and where different interpretations and approaches are acceptable or even desirable. This is precisely the kind of challenge which itself requires cross-cultural cooperation: the delineations must be informed by diverse cultural perspectives on the impacts of AI in different contexts, and the needs and desires of different populations. Indeed, this approach is reflected in the National Governance Principles of the New AI, China, which includes the recommendation to ‘Launch international dialogue and cooperation; with full respect for each country’s principles and practices for AI governance, promote the formation of a broad consensus on an international AI governance framework, standards, and norms’ (Laskai and Webster 2019).

Regional and cultural differences on the abstract level of ethical assumptions and high-level principles are also not necessarily a barrier to agreement on more concrete norms and governance. If it were impossible to reach any practical agreement without consensus on fundamental ethical issues, many important international agreements—such as the Nuclear Weapons Ban Treaty—would not have been possible. The notion of an ‘incompletely theorized agreement’ in legal scholarship (Sunstein 1995) describes how it is often possible for people who disagree on fundamental or abstract matters to nonetheless agree on specific cases—and how this is central to the functioning of law, as well as of a pluralistic society more broadly. Several authors in the literature on intercultural information ethics have promoted the related idea of aiming to arrive at an ‘overlapping consensus’ (Rawls 1993), where different groups and cultures may have different reasons for supporting the same norms and practical guidelines (Taylor 1996; Søraker 2006; Hongladarom 2016). For example, Taylor (1996) discusses how internationally shared norms of human rights have been grounded in different cultural traditions. While Western philosophies differ substantially from others such as Buddhism in how much importance they give to the human agent and its unique place in the cosmos, both seem to end up grounding the same norms of human rights.

Wong (2009) criticizes this idea that intercultural information ethics can arrive at shared norms with different justifications, suggesting that this risks making norms too ‘thin’, devoid of all normative content. Søraker (2006) acknowledges a similar objection to this ‘pragmatic’ approach to information ethics: that it may result in agreements that are fragile due to not being sufficiently grounded in substantive normative content. However, in line with Søraker’s own response to these objections, we believe that the aim of ‘overlapping consensus’ should be to arrive at shared norms and practical guidelines which are in fact more robust by virtue of being endorsed and justified from a range of different philosophical or normative perspectives. This should be distinguished from a situation where one culture uses pragmatic arguments to attempt to force their own values upon others, or where several cultures reach agreement but for reasons with little normative content, which, we agree with Wong, would be concerning. Taylor’s example of human rights being supported by multiple philosophical perspectives appears to demonstrate the plausibility of this kind of well-substantiated overlapping consensus.

Indeed, we suggest that finding areas of overlapping consensus on norms and practical guidelines may be much more important for ensuring the benefits of AI than aiming for global consensus on a shared set of fundamental values—an aim which underpins many recent proposals.Footnote 24 Consensus on high-level ethical principles does not necessarily mean they are well-justified (Benjamin 1995), and the best way to arrive at more robustly justified norms, standards, and regulation for AI will be to find those that can be supported by a plurality of different value systems.

4.3 Putting Principles into Practice

Even where it is possible to improve mutual understanding and arrive at shared governance approaches in theory, some might object that it will still be difficult to influence the development and use of AI in practice, especially since to do so requires influencing the behaviour of powerful states and companies that have little incentive to cooperate.

Although a full discussion of the complex power dynamics between states, companies, and other actors on issues relevant to AI ethics and governance is beyond the scope of this paper (and worthy of further research), we briefly explain why we do not think this barrier undermines our proposals. Though the challenge is real, historical precedent suggests that it is possible for public and academic alliances to influence the behaviour of powerful actors on issues of global importance. There is evidence to suggest that broad, cross-cultural ‘epistemic communities’—i.e. networks of experts in a particular domain—can be particularly effective at supporting international policy coordination (Haas 1992). For example, a community of arms control experts helped shape cooperation between the USA and the Soviet Union during the Cold War by creating an internationally shared understanding of the problem of nuclear arms control (Adler 1992), and the ecological epistemic community successfully coordinated national policies to protect the stratospheric ozone layer (Haas 1992).

The commitments of large companies and even nations around AI have already been influenced by combinations of employee activism, international academic research, and campaigning. A notable example is the application of AI in military contexts. Concerns over the use of AI in warfare have been the subject of high-profile campaigns by experts across academia and civil society internationally, such as those involved in the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots. These campaigns played a leading role in establishing discussion on lethal autonomous weapons (LAWs) at the United Nations Convention on Certain Conventional Weapons (CCW; the body that hosted negotiations over the banning of cluster munitions, blinding laser weapons, and landmines) (Belfield 2020). Ninety countries have put forward statements on LAWs, with most doing so at the CCW; 28 countries support a ban (Campaign to Stop Killer Robots 2018).Footnote 25 In 2018, over 4000 Google employees signed a letter protesting Google’s involvement in the Pentagon’s Project Maven, a military project exploring the use of AI in footage analysis (Shane and Wakabayashi 2018), and a number resigned (Conger 2018). Several other campaigning groups, comprising scholars and researchers from the USA, Europe, Japan, China, Korea and elsewhere, released public articles and letters supporting the concerns of the Google practitioners (ICRAC 2018).Footnote 26 Google subsequently announced it would not renew its contract on Project Maven, and would not bid on a $10 billion Department of Defense cloud computing contract (Belfield 2020).

More broadly, international academic and civil society input has played a significant role in shaping principles that are likely to form the basis for binding regulation in years to come. For example, the European Commission’s white paper On Artificial Intelligence - A European Approach to Excellence and Trust (European Commission 2020) lays out ‘policy options for a future EU regulatory framework that would determine the types of legal requirements that would apply to relevant actors, with a particular focus on high-risk applications’ (European Commission 2020b). This document was strongly influenced by the work of the European Union’s High-Level Expert Group on Artificial Intelligence, comprising 52 European experts from across academia, industry, and civil society.Footnote 27 Similarly, the US Department of Defense has formally adopted ethical principles on AI (US Department of Defense 2020), after 15 months of consultation with US-based academic, industry, and government stakeholders, and has hired staff to implement these principles (Barnett 2020). While in both cases the groups consulted were region-specific, the degree of alignment and overlap between principles developed in different regions suggests the insights and recommendations are substantially informed by interaction with broader epistemic communities from other regions. This suggests that insights gained from cross-cultural cooperation and consensus can meaningfully feed into regulatory frameworks at a regional and national level.

5 Recommendations

Academia has an important role to play in supporting cross-cultural cooperation on AI ethics and governance: both through research into where and what kinds of cooperation are most needed, and through initiatives to overcome more practical barriers to cooperation. Our discussion in this paper raises many questions that will require diverse academic expertise to answer, including questions about which misperceptions most hinder cooperation across regions; where international agreement is needed on AI ethics and governance; and how agreement might be reached on specific governance standards despite differences on ethical issues. Academic communities are also particularly well-suited to building greater mutual understanding between regions and cultures in practice, due to their tradition of free-flowing, international, and intercultural exchange of ideas. Academics can have open conversations with international colleagues in a way that is often challenging for those working in industry or government, and two academics from different parts of the world can productively collaborate even if each has strong criticisms of the other nation’s government and/or companies.

The following recommendations indicate a number of specific steps that academic centres, research institutions, and individual researchers can take to promote cross-cultural understanding and cooperation on AI ethics and governance.

Develop AI Ethics and Governance Research Agendas Requiring Cross-cultural Cooperation

Greater cross-cultural collaboration on research projects will play a crucial role in building an international research community that can support international policy cooperation (Haas 1992).

An example of a research project well-suited to such collaboration would be to conduct comparative foresight exercises exploring how both positive visions of, and prominent concerns about, AI’s impact on society vary across cultures. This could help with developing a more global vision for what we hope to both achieve and avoid in AI development, which could guide more practical discussions around ethics and governance frameworks. There may be particularly valuable opportunities for consensus generation around the avoidance of particular negative outcomes; safety and security are fundamental to human cultures worldwide, and so developing agreements to avoid threats to these may be an easier starting point. However, the authors feel that it is important not to neglect positive visions, as the opportunity for scholars across cultures to co-create shared positive futures may represent an excellent way to delve into nuances within shared values.

It would also be particularly valuable to explore the ongoing and expected impact of AI on developing countries, in collaboration with experts from these countries. Such research should aim to ensure that decisions to deploy technology in developing nations are made with the guidance of local expertise in such a way as to empower local communities (Hagerty and Rubinov 2019). On a more practical level, international research groups could productively work together to develop frameworks for international sharing of research, expertise, and datasets on AI safety, security, and avoidance of societal harm.

Collaboration between researchers from multiple regions and cultures will also be essential to further research on the topic of cross-cultural cooperation itself. Our discussion in this paper, especially in section 4, has pointed to many research areas in need of further exploration, including the following:

  • Exploring, identifying, and challenging perceived cross-cultural differences in values, assumptions, and priorities relevant to AI ethics and governance, on multiple different levels, includingFootnote 28 the following:

    • Analysing similarities and differences in technology ethics across different philosophical traditions, and exploring how these may affect AI development, deployment, impacts, and governance in practice;

    • Exploring the empirical evidence behind claims of key value differences between cultures. For example, a project might identify and explore perceived value differences between Eastern and Western cultures relevant to AI governance, such as those relating to data privacy, the role of the state vs. the individual, and attitudes towards technological progress;

    • Understanding regional differences in practical priorities and constraints relating to the use of AI in society, and the implications of these differences for AI research and development.

  • Further analysis to identify aspects of AI governance where global agreement is needed, and differentiating these from areas in which cross-cultural variation is either acceptable or desirable;

  • Cross-cultural contribution to the development of international and global AI standards in key domains for which these are needed; exploration of flexible governance models that allow for a combination of global standards and regional adaptability where appropriate;

  • Exploring models and approaches for reaching agreement on concrete cases, decisions, or governance standards despite disagreement on more fundamental or abstract ethical issues, and identifying cases of this being done successfully in other domains that can be translated to AI ethics and governance.

Some excellent work is already ongoing in each of these areas. However, we believe that the pace at which AI is being deployed in new domains and regions calls for a greater focus within the academic community on building cross-cultural bridges, and incorporating cross-cultural expertise, within a wider range of ethics and governance research projects.Footnote 29

Translate Key Papers and Reports

Language is a major practical barrier to greater cross-cultural understanding around AI development, governance and ethics, as it has been in many other areas of science (Amano et al. 2016). It would therefore be extremely valuable for the burgeoning literature on AI ethics and governance, as well as the literature on AI research, to be available in multiple languages. While many leading Asian scholars in AI are fluent in English, many are not; and the fraction of Western researchers fluent in Mandarin or Japanese is far lower.

Furthermore, some sources of the misunderstandings we have discussed may link to the ways in which key documents from one region are understood and presented in other regions. In Western media, China’s 2017 New Generation Artificial Intelligence Development Plan has been presented as promoting the aim of global dominance in AI economically and strategically (Knight 2017; Demchak 2019). However, from the Chinese perspective, national AI development goals appear to be primarily motivated by the needs of the Chinese economy and society (China State Council 2017), rather than necessarily international competitive superiority (Ying 2019). It appears that some translations may have led to misinterpretations of key terms and points. For example, the original Chinese text referred to China becoming ‘a primary AI innovation center of the world by 2030’ (Ying 2019).Footnote 30 However, some English translations of the report rendered this phrase as China becoming ‘the world’s primary AI innovation center’ (e.g. Webster et al. 2017). This was then interpreted and presented by Eric Schmidt, former executive chairman of Google parent Alphabet, as ‘By 2030 they will dominate the industries of AI. Just stop for a sec. The [Chinese] government said that.’ (Shead 2017). While this evolution of wording may appear minor, it carries important connotations: the language in the original context suggests leadership and progress, as opposed to global dominance. Having multiple high-quality translations of the most important documents would allow scholars to explore nuances of language and context that may be lost in the reporting of these documents.

Producing high-quality versions of papers and reports in multiple languages also conveys respect and an appetite to engage cross-culturally, which is likely to encourage cooperation. High-quality translation of academic and policy materials is a challenging and time-consuming task that deserves stronger support and celebration. There is a growing body of work to be celebrated in this vein: for example, Jeff Ding’s translation of a range of key Chinese AI documents (Ding 2019); the work of Intellisia in China, which publishes on international relations, technology, and other topics in five languages (http://www.intellisia.org/); Brian Tse’s translation into Chinese of documents including OpenAI’s CharterFootnote 31; and New America’s translation of the Chinese Ministry of Industry and Information Technology’s Three Year Action Plan (Triolo et al. 2018).

Alternate Continents for Major AI Research Conferences and Ethics and Governance Conferences

To encourage more global participation in AI development, ethics, and governance, we recommend that many of the leading conferences and fora on these topics alternate between multiple continents. This has several advantages. It reduces the cost and time commitment of participation for scholars from parts of the world in which these conferences do not frequently take place. It avoids restrictive visa limitations differentially affecting certain parts of the global research community. It encourages the involvement of local organizers, who can play an effective role in engaging local research communities that might not consider travelling far overseas for an event. It also encourages organizers to run events multilingually rather than monolingually.

Again, there are encouraging steps. Of the AI research conferences, IJCAI took place in Macau in 2019 and Beijing in 2013, the first two times the conference had been held in China (although it has been held in Japan twice). ICML took place in Beijing in 2014 and will be held in Seoul in 2021, and ICLR 2020 will be held in Ethiopia, making it the first of the top-tier machine learning conferences to be held in Africa. There are fewer established conferences explicitly focused on AI ethics and governance, since the field’s growth is relatively recent, but it may be particularly important for these conferences to ensure a global presence by alternating the continent on which they are held where possible. AI Ethics and Society, for example, is currently held in the USA due to its ties to AAAI; the importance of building an international academic community around these issues may justify finding some way to change this. There is a burgeoning set of AI ethics and governance conferences in China, including the Beijing Academy of AI Conference series. There are also several existing conferences which cover topics relevant to AI ethics and governance (even if not so explicitly centred around them) and which enable more international participation, such as the World Summit on the Information Society Forum (held most years in Geneva), as well as the Internet Governance Forum and RightsCon (both of which have been held in a range of locations historically, including South America, India, and Africa, though neither in East Asia).Footnote 32

Establish Joint and/or Exchange Programmes for PhD Students and Postdocs

Encouraging collaboration between researchers from different cultures early in their careers will help support greater cooperation and mutual understanding as research advances. Many international fellowships and exchange programmes exist, especially between the USA and China (e.g. the Zhi-Xing China Fellowship and the Schwarzman Scholars programme) as well as between the UK and Singapore (King’s College London offers a joint PhD programme in Philosophy or English with the National University of Singapore). To our knowledge, however, no such initiatives exist that are explicitly focused on AI; the only initiative focused on AI ethics and governance of which we are currently aware is the international fellowship programme recently established by the Berggruen China Institute (Bauch 2019).Footnote 33 Establishing more such programmes could be valuable for the future of international cooperation around AI, and there are many existing models and initiatives from which to build and learn.

More broadly, the authors endorse the Partnership on AI’s recommendations to governments on establishing visa pathways, simplifying and expediting visa processes, and ensuring just standards to support international exchange and collaboration among multidisciplinary AI/ML experts. We emphasize that these recommendations extend to experts working or seeking to work on AI ethics and governance, work that sometimes falls outside of what is classed as ‘skilled technology work’ (PAI Staff 2019).

6 Limitations and Future Directions

We believe that academia has an important role to play in supporting cross-cultural cooperation in AI ethics and governance: that it is possible to establish effective communities of mutual understanding and cooperation without needing to resolve all fundamental value differences, and that reducing misunderstandings and misperceptions between cultures may be of particular importance.

However, we recognize that the suggestions in this paper cannot go all the way to overcoming the many barriers to cross-cultural cooperation, and that much more work needs to be done to ensure AI will be globally beneficial. We briefly highlight two broad avenues of further research in support of this goal:

More Detailed Analysis of Barriers to Cross-cultural Cooperation, Especially Those Relating to Power Dynamics and Political Tensions

While analysis of historical successes suggests that it is possible for cross-cultural initiatives around AI ethics and governance to considerably shape how norms, standards, and regulation evolve in practice, there are still many barriers to implementation and enforcement that we were unable to consider in this analysis. Further research into when and how attempts to influence globally relevant norms and regulation have been successful in the past would be of considerable value.

We acknowledged earlier in this paper that various issues related to power relations and political tensions likely pose significant barriers to cross-cultural cooperation, beyond problems of value differences and misunderstandings between cultures. More research on how these issues present barriers to cross-cultural cooperation in AI ethics and governance would therefore be particularly valuable, helping us to understand the limits of academic initiatives in promoting cooperation, and in what ways these approaches need to be embedded within a broader analysis of power and political dynamics.

Considering the Unique Challenges of Cross-cultural Cooperation Around More Powerful Future AI Systems

Future advances in AI, which some scholars have theorized could have impacts as transformative as the industrial or agricultural revolutions (Karnofsky 2016; Zhang and Dafoe 2019), may raise new challenges for global cooperation of a greater scale than we already face. Without careful global stewardship, such advances could lead to unprecedented inequalities in wealth and power between technology-leading and lagging nations. Others have gone further, theorizing about the possibility of developing systems exhibiting superintelligence (i.e. greater-than-human general intelligence; Bostrom 2014). Such systems, due to their tremendous capability, might pose catastrophic risks to human civilisation if developed without careful forethought and attention to safety. It has been proposed that a key challenge for avoiding catastrophic outcomes will be value alignment—designing systems that are aligned with humanity’s values (Russell 2019). This would greatly increase the importance and urgency of reaching global consensus on shared values and principles, as well as finding ways to design systems to respect values that are not shared.

Expert views differ widely on how far in the future such advances might lie, with most predicting decades. However, developing the collaborations and agreements necessary for an effective and coordinated response may also require decades of work. This suggests that cooperative initiatives today must address not just the ethics and governance challenges of current AI systems, but should also lay the groundwork for anticipating and engaging with future challenges.

7 Conclusion

The full benefits of AI cannot be realized across global societies without a deep level of cooperation—across domains, disciplines, nations, and cultures. The current unease and mistrust between the USA and Europe on the one hand, and China on the other, places a particular strain on such cooperation. Misunderstandings may play an important role in fuelling this mistrust, and differences in broader societal and political priorities frequently appear to be overemphasized or misunderstood.

At the same time, it would be naive to assume that all major ethical principles relating to AI can be shared in full between these regions and enshrined in rules and standards. Even if this were the case, it would not be desirable for these regions to be overly dominant in shaping the ethics and governance of AI globally; all global communities to be affected by AI must be included and empowered. However, efforts to achieve greater understanding between these ‘AI superpowers’ may help in two ways: first, by reducing key tensions within the global AI governance sphere; and second, by providing lessons that can contribute to ethics and governance frameworks capable of supporting a greater diversity of values while allowing consensus to be achieved where needed. For a well-functioning system of global cooperation in AI, the challenge will be to develop models that combine principles and standards shaped and supported by global consensus with the variation that allows research and policy communities to best serve the needs of their societies.

On a more practical level, the international AI research community, and AI ethics and governance communities, must think carefully about how their own activities can support global cooperation, and a better understanding of different societal perspectives and needs across regions. Greater cross-cultural research collaboration and exchange, conferences taking place in different regions, and more publication across languages can lower the barriers to cooperation and to understanding different perspectives and shared goals. With political winds increasingly favouring isolationism, it has never been more important for the research community to work across national and cultural divides towards globally beneficial AI.