1 Introduction

Over the past few years, even before the rise of foundation models, artificial intelligence (AI) governance has been receiving more attention worldwide. The sudden public releases of ChatGPT and other tools based on foundation models triggered public fascination, fear, and accompanying calls for regulation, including for international governance (Roberts, 2023). While domestic AI regulation is necessary to ensure that the risks and benefits of AI do not disproportionately fall on specific groups, global AI governance—in the form of an international governing body, treaty, or other mechanism—may be necessary to ensure a broad, equitable distribution of benefits and harms. Thus, this article focuses on global governance as a specific form of productive cooperation aiming to maximize the benefits and minimize the risks of AI (Finkelstein, 1995; Weiss, 2000), with an equitable distribution of both.

Previous work has examined AI governance efforts in the US and China: Roberts et al. (2021) compared AI governance in the US and EU, while Roberts et al. (2022) did the same for China and the EU. Zhang (2024) and Roberts et al. (2021) have dug more specifically into Chinese AI governance, while Sheehan (2023) has examined how Chinese AI regulations are made. Hine and Floridi (2022a) compared AI governance in the US and China, but new developments are bringing their visions of a “Good AI Society” (Roberts et al., 2021; Cath et al., 2018)—the values underlying an AI-powered society—into sharper relief, necessitating further analysis. Central to each country’s efforts are measures to balance innovation and regulation, while also promoting domestic values more broadly. Considering these three factors, this paper will examine recent governmental AI governance efforts in the US and China,Footnote 1 analyzing official governance documents from each, plus international documents that show how each Good AI Society is evolving in the global context. Documents were selected to show the overall trajectory of regulation; this paper is not a comprehensive overview of every policy document. For China, this is primarily new AI-related legislation, plus updated funding and local documents revealing shifts in where AI governance is concentrated. For the US, this mostly comprises official documents issued by the executive branch and national institutions. The paper will consider the current competitive dynamic and examine whether any of the previously identified factors that might aid cooperation are still valid.

Thus far, the US has focused on voluntary frameworks but is moving towards concrete legislation, while China is increasingly centralizing its efforts. It has issued new ethics documents, passed regulations on algorithms and “synthetic content,” and drafted regulations on generative AI and facial recognition. Amidst these regulatory developments, relations between the countries have deteriorated. The US and China have long been geopolitical rivals, but since the late 2000s, “the primacy of competition has become a core feature of the US-China relationship,” fueled by major economic, technological, and governance differences (Medeiros, 2019). Currently, geopolitical tensions around China’s relations with Taiwan (Wang, 2023) and Russia (Talley & DeBarros, 2023); economic conflict via US sanctions on China’s semiconductor industry (US Department of Commerce, 2022) and the ongoing trade war (Schuman, 2024); and espionage concerns, including alleged surveillance by TikTok (Magnier, 2023) and the “spy balloon” seen over the US in early 2023 (Cooper et al., 2023), have strained diplomatic relations and intensified competitive rhetoric (Davis, 2023), including around AI. Furthermore, the US is ratcheting up initiatives around AI with “democratic values,” which may preclude cooperation with China. However, China is at least paying lip service to the idea of international cooperation and ethical pluralism, putting the onus on the US to respond. While these competitive dynamics may make cooperation on AI governance less likely, I reject the claim that the two countries’ values are fundamentally misaligned. Based on a philosophy-of-technology level of abstraction drawn from Hine and Floridi (2022a)—which provides a broader grounding and a framework by which to analyze the trajectory of development, rather than just a snapshot of current dynamics—I argue that the philosophies underlying each country’s development can still lay the groundwork for agreement on global AI governance. However, I conclude with a note of caution by contextualizing the philosophical discussion in geopolitical realities.

The article will proceed as follows. Section 2 analyzes the US’s regulatory developments and what they mean for its Good AI Society; Section 3 does the same for China. Section 4 compares the two from a philosophy-of-technology level of abstraction and discusses implications for global AI governance. Section 5 concludes.

2 United States

Previous work has described how the US Good AI Society evolved through the Obama, Trump, and Biden administrations (Hine & Floridi, 2022a). Obama established a diversity- and ethics-forward vision that was largely discarded by the Trump administration, which, after a period of neglect, initiated policies to promote AI development in line with nebulous “American values.” This included minimizing regulation, maximizing innovation, and competing with China, as well as introducing the idea of “trustworthy” AI that prioritized trust for the sake of spreading AI rather than upholding fundamental rights. The start of the Biden administration brought back elements of the Obama administration’s emphasis on diversity and ethics while escalating competition with China (Hine & Floridi, 2022a). Now, significant new developments have filled in some of the details of Biden’s Good AI Society. Innovation and the promotion of American values have been consistent themes, but regulatory efforts are now increasing and attempting to support the Biden administration’s community-centered, fundamental rights-based vision of AI, centering trustworthy AI in a way more in line with international definitions. This reflects the internationalization of a vision that, while still distinctly American, is relying more on and being influenced by allies.

The Biden administration has issued two rounds of AI regulation. Leading the first was the White House Office of Science and Technology Policy’s “Blueprint for an AI Bill of Rights” (the Blueprint), released in October of 2022. The initial proposal, in October of 2021, called for a “bill of rights for an AI-powered world” and discussed how it might be enforced (Lander & Nelson, 2021). The final product, as implied by the title “Blueprint,” is a guide to five principles that should inform AI development; it is not enforceable. However, it is a promising vision of community-centered, AI-enhanced social equity, outlining five principles that AI should follow and why they are important (Hine & Floridi, 2023). It is notable among similar documents in that it prioritizes communities, which are often overlooked when assessing AI impacts that manifest at individual or societal scales, and aims to protect Americans from abusive data practices, unsafe and nontransparent systems, and algorithmic discrimination (White House Office of Science and Technology Policy, 2022). Its acknowledgment of the systemic biases in American society that feed into and are propagated by AI systems is notable and reflects the Biden administration’s larger focus on racial and social equity. Accompanying this is Executive Order (EO) 14091 on “Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government,” which aims to advance “equity for all” and address systemic racism (Executive Office of the President, 2023). This includes “root[ing] out bias in the design and use of new technologies, such as artificial intelligence,” and using AI to “advance equity” (Executive Office of the President, 2023).

The Biden administration’s other documents from this first wave also support its community orientation and equity focus. Released shortly after the Blueprint were the National Institute of Standards and Technology (NIST) AI Risk Management Framework 1.0 (RMF) and the National Artificial Intelligence Research Resource (NAIRR) Task Force Implementation Plan. The NAIRR would support communities underrepresented in AI by creating a shared research infrastructure to increase diversity and expand access to computational resources, data, and educational tools, which would in turn facilitate development and US leadership (NAIRR Task Force, 2023). The RMF is predicated on the idea that diverse teams are necessary for better risk management and echoes the Blueprint’s focus on community impacts (NIST, 2023). In keeping with the voluntary nature of the Blueprint and other early US regulatory initiatives, the RMF is not mandatory.Footnote 2 Initially, the NAIRR’s funding status was uncertain, but in January of 2024, the National Science Foundation (NSF) launched a pilot (NSF, 2024). This was mandated by the next piece of regulation we will discuss.

The Biden administration’s flagship document is Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023, which imposes several requirements, mostly on the federal government. This has been described as a “whole-of-government” effort (Zhang, 2024) that requires government agencies to develop AI strategies and best practices, mandates a number of reports and committees, and creates significant AI talent and recruitment programs. Companies are required to disclose when they are training foundation models above a certain FLOP threshold and to share information about their red-teaming efforts.

Though much of EO 14110 focuses on AI safety—it was issued in the run-up to the UK’s AI Safety Summit—concern for countering discrimination, bias, and other injustices is woven throughout. An entire section addresses “Advancing Equity and Civil Rights,” calling on various agencies to tackle algorithmic discrimination and analyze how AI is being used in criminal justice, government benefit programs, hiring, housing, and healthcare, with the goal of protecting civil rights. It continues the community orientation of the Blueprint by including measures to engage with vulnerable communities and invest in training and recruitment for underserved communities. At the same time, it contains numerous references to innovation—framed as “responsible innovation” in the “Policy and Principles” section—including an entire section on “Promoting Innovation and Competition” that aims to attract AI talent; strengthen public-private partnerships and increase competition across industries; and support workers in using AI, coping with job disruption, and dealing with potential AI abuses. Overall, the US’s Good AI Society involves the executive branch and national agencies promoting domestic values and government innovation, as well as private-sector innovation through light-touch regulations.

The Biden administration is working closely with American Big Tech companies to support innovation and self-regulation. The National Artificial Intelligence R&D Strategic Plan already had “Expand Public-Private Partnerships to Accelerate Advances in AI” as a strategy, but the 2023 update expands it significantly. Furthermore, Big Tech companies have been represented at congressional hearings (“Oversight of A.I.: Rules for Artificial Intelligence”, 2023; “Oversight of A.I.: Principles for Regulation”, 2023) and White House summits, including with the CEOs of Google, Anthropic, Microsoft, and OpenAI (The White House, 2023c). This approach led 15 technology companies to commit to a set of voluntary risk management principles involving safety, security, watermarking, and risk research (The White House, 2023a, 2023b). These principles are relatively vague and overlap with actions these companies are likely taking anyway (Harvard Law Review, 2024)—for instance, it is in their best interest to ensure that their systems are secure—and they show the Biden administration’s commitment to working hand-in-hand with industry actors to preserve friendly relations with the Big Tech companies driving AI innovation. It is also notable that EO 14110, while it invokes the Defense Production Act to compel action from the private sector, does so only to mandate reporting of model development and red-teaming efforts for foundation models that exceed the current state-of-the-art; it does not restrict companies’ ability to innovate or otherwise impose a substantial burden.

Despite a flurry of executive and agency action, legislative action has been slower to materialize.Footnote 3 Several proposals, including the SAFE Innovation Framework, the Blumenthal-Hawley Framework, and the National AI Commission Act, seek to outline broad approaches to AI, and a series of proposed bills would address specific problems in AI, like the generation of deepfake pornography (Covington Alert, 2023). Fully examining these proposals is outside the scope of this article, especially since which—if any—will pass is difficult to predict. However, what is notable is that regulation is now being painted as a way to promote American leadership and values in the face of China’s rise, transferring domestic Good AI Society values to an international context. In June of 2023, Senate Majority Leader Chuck Schumer, a Democrat, proposed the SAFE Innovation Framework for AI Policy—SAFE stands for security, accountability, “protecting our foundations,” and explainability—intended to promote policy action, and called for a new process to develop policies to implement the framework. He specifically calls out racial equity as a priority (Schumer et al., 2023), echoing a key focus of the Biden administration. However, his larger focus is on innovation and countering China, two themes seen throughout US AI policy that also underpin the NAIRR Implementation Plan and the RMF. Schumer emphasized that “innovation must be our North Star” and that policy should encourage, not stifle, innovation. If the US lags, then “The Chinese Communist Party, which has little regard for the norms of democratic governance, could leap ahead of us and set the rules of the game for AI. Democracy could enter an era of steep decline” (Schumer et al., 2023). The fear of China “pulling ahead” or “winning the AI race” has been used to lobby against regulation (Naughton, 2023)—and was expressed by OpenAI CEO Sam Altman in a Congressional hearing (“Oversight of A.I.: Rules for Artificial Intelligence”, 2023)—but is now being turned into motivation to pass regulation that promotes values-aligned AI. In a US House of Representatives hearing called “Artificial Intelligence: Advancing Innovation Towards the National Interest,” Republican Representative Frank Lucas said, “embedding our values in AI technology development is central to our economic competitiveness and national security,” and later that “We cannot and should not try to copy China’s playbook, but we can maintain our leadership role in AI and we can ensure its development with our values of trustworthiness, fairness, and transparency” (“Artificial Intelligence: Advancing Innovation in the National Interest”, 2023). Although American politics are currently highly partisan, the need to counter China and address the threats of AI is a unifying motivator, bringing regulation and responsibility into the innovation-focused conversation.

Anti-China rhetoric in relation to AI arose during the Trump administration (Hine & Floridi, 2022a). The Biden administration has maintained the Trump administration’s desire to compete with China, but it has discarded or substantially modified other aspects of its approach to governance. Fewer than 40% of the requirements in the AI in Government Act of 2020 (passed in the 2021 Consolidated Appropriations Act), EO 13859 on Maintaining American Leadership in Artificial Intelligence, and EO 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government have been verifiably implemented, and none of the AI in Government Act’s requirements with a deadline were implemented (Lawrence et al., 2022). EO 13859, which established the “American AI Initiative” and required agencies to create AI regulation plans, was also largely ignored: only five of 41 assessed agencies had published such a plan as of December 2022 (one with “None” written in every section), and the Office of Management and Budget (OMB) was 16 months late with its required guidance memo (Lawrence et al., 2022). These requirements are clearly not being prioritized. However, Biden’s EO 14110 contains a number of requirements with due dates, which the administration indicates are on track (Alder, 2024).

In terms of values, the Blueprint is the most comprehensive outline of a Good AI Society that a US administration has issued to date. In particular, it begins to define values like “trustworthiness,” which has been referenced in international governance principles, making it important to concretely define. While the Trump administration’s EO 13960 introduced the idea of “trustworthy” AI to the US Good AI Society, it did not explicitly define “trustworthy” (which appeared only in the title) but laid out nine principles for AI. References to “trust” within the EO outlined how AI should be “used in a manner that fosters public trust” for wide adoption (Executive Office of the President, 2020), which contrasts with the Organisation for Economic Co-operation and Development’s (OECD) values-based AI principles designed to “promote use of AI that is innovative and trustworthy and that respects human rights and democratic values” (OECD, 2019). These include:

1. Inclusive growth, sustainable development, and well-being

2. Human-centered values and fairness

3. Transparency and explainability

4. Robustness, security, and safety

5. Accountability

The RMF lays out characteristics of trustworthy AI systems that are quite similar. “Valid and reliable” is the foundation of trustworthiness. The other characteristics are:

1. Safe

2. Secure and resilient

3. Explainable and interpretable

4. Privacy-enhanced

5. Fair with harmful bias managed

Finally, “accountable and transparent” relates to all of the other characteristics (NIST, 2023). Of the OECD principles, all are covered except for “human-centered values” (often seen in European AI regulations, and seemingly replaced by “American” and “democratic values” in US plans) and “inclusive growth, sustainable development, and well-being,” but the RMF overall seems compatible with the OECD principles. However, although these are not top-level characteristics, the RMF puts “People and Planet” at the core of the AI lifecycle and references sustainability throughout; economic growth and well-being are conceived as results of trustworthy AI rather than precursors (NIST, 2023). Of the Biden administration’s releases from the last few years, the RMF is the most promising on the environment, which the Blueprint and EO 14110 scarcely mention (Hine & Floridi, 2023).

The NAIRR report also centers people in that it is about increasing diversity in AI development. However, it fails to define trustworthiness and only addresses the environment in a one-page aside about electronics waste, with recommendations that the US “work towards” carbon-neutral computing technology and consider evaluating the energy efficiency and environmental sustainability of potential resource providers. What it emphasizes more is competition with China, which it says “threatens American dominance” in AI (NAIRR Task Force, 2023). Indeed, this fear underlies much of the Biden era’s AI policy, including the passage of the CHIPS and Science Act, which intends to defensively bolster the American semiconductor industry and reduce reliance on China (The White House, 2022), and its accompanying export controls and sanctions intended to actively hobble the Chinese chip industry (Hine & Floridi, 2022a). These had already been escalated from the Trump to the Biden administration, with further controls added in October of 2023 (Bureau of Industry and Security, 2023). The Biden administration’s early AI documents emphasized working with allies (Hine & Floridi, 2022a), and while many of its flagship documents are primarily inward-facing to support digital sovereignty, it led negotiations with Japan and the Netherlands to limit the export of chip manufacturing equipment to China (Hayashi & Salama, 2023). These go hand-in-hand with pressure on allies to limit China’s role in 5G infrastructure and are a form of “digital expansionism,” the exportation of digital governance values through a variety of mechanisms (Roberts et al., 2023). While other countries have their own motives for countering China, the US is going beyond security concerns to ideological ones, painting itself as the leader of a coalition with fundamentally different values.

The US is not only recruiting allies to increase pressure on China, but is building a values-aligned AI coalition through groups like the EU-US Trade and Technology Council (TTC). In December of 2022, the TTC announced a Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management (the Roadmap) (US-EU TTC, 2022), and in partial fulfillment of the Roadmap’s short-term objectives, in May of 2023, the TTC released a shared taxonomy of AI-related terms and mapped how the EU and US are involved in standardization activities (“AI Policy Contributions”, 2021). The taxonomy compares EU and US definitions of terms and derives a shared final definition. The terms include technical ones like “adaptive learning” but also governance-focused ones like “human-centric AI”Footnote 4 and “trustworthy AI” (US-EU TTC, 2023). The US definition of trustworthy AI is drawn from the RMF, while the EU definition is from the work of the EU High-Level Expert Group on AI (HLEG) and states that AI should be lawful, ethical, and robust (AI HLEG, 2019). The final definition combines those components with the five characteristics outlined by the RMF, creating a shared foundation for further cooperation and solidifying the EU and US as a unified bloc promoting AI “based on shared democratic values including the respect for the rule of law and human rights” (US-EU TTC, 2023).

The US is also working with groups like the G7 and the OECD-hosted Global Partnership on AI (GPAI) to advance AI governance based on “shared democratic values,” which in a G7 Communiqué include “fairness, accountability, transparency, safety, protection from online harassment, hate and abuse and respect for privacy and human rights, fundamental freedoms and the protection of personal data” (“G7 Hiroshima Leaders’ Communiqué”, 2023). The varying definitions of “democratic values” are redolent of “American values,” which were frequently invoked under the Trump administration but varied from document to document (Hine & Floridi, 2022a). Still, the US’s new Good AI Society includes firm alliances, meaning that a more inclusive rallying cry is necessary. The need for allyship is reiterated in the 2023 Update to the National Artificial Intelligence Research and Development (R&D) Strategic Plan’s addition of a ninth strategy, “Establish a Principled and Coordinated Approach to International Collaboration in AI Research,” which emphasizes partnerships with “countries that share [the US’s] core values,” providing “opportunities to… showcase U.S. leadership” (Select Committee on Artificial Intelligence of the National Science and Technology Council, 2023). Even when collaborating with allies, the US still wants to be the leader, but in a more equal partnership than before.

The Biden administration’s Good AI Society still foregrounds innovation, but not at all costs. Its faith in the American firm includes a belief that the US can innovate while prioritizing trustworthiness and fundamental values. One way it wants to do this is by increasing the diversity of AI development so that it will better serve the American people and foster social equity. As before, the benefits of AI in this vision are limited to the US and its allies, but the US is more adaptable now, willing to compromise rather than expecting allies to subject themselves to the US vision (Hine & Floridi, 2022a). To achieve this, the US will work with other governments and private companies. However, tangible regulation is not guaranteed, and continuing to rely on private companies to self-regulate, even with additional guidance, may jeopardize the vision of inclusive and non-biased AI.

3 China

After the release of high-level guiding documents and ethics principles, the past several years of AI governance in China have seen a proliferation of hard laws and ethics principle-sets showing that the Chinese Good AI Society is a stable yet innovative one intending to lead globally. China began its AI regulation with an attitude of cautious innovation and used “fragmented authoritarianism” to delegate AI development to local governments and see what approaches bubbled to the top, while explicitly aiming for global leadership in development, standards, and ethics (Hine & Floridi, 2022a). It was primarily concerned with promoting careful development, but concrete regulation began with a draft regulation on recommendation algorithms in August of 2021. The final Provisions on the Management of Algorithmic Recommendations in Internet Information Services regulate several kinds of algorithms (Ministry of Industry and Information Technology, 2021b). Since then, regulations on “deep synthesis” and generative AI have also been drafted and finalized by several bodies (Cyberspace Administration of China, 2023c; Ministry of Industry and Information Technology, 2021a). A “trial” draft law on facial recognition was also released (Cyberspace Administration of China, 2023a), which would prohibit the use of facial recognition without consent and in some private spaces but contains large carveouts for national security that will enable the government to continue to use invasive, ethnicity-detecting facial recognition in, for example, Xinjiang, where the Uighur Muslim population is subject to AI-powered surveillance and oppression (Human Rights Watch, 2021). The “deep synthesis” regulations cover generated text, audio, video, and images, but are primarily aimed at “deepfakes” (Hine & Floridi, 2022b). The deep synthesis regulations were finalized on November 25 of 2022, five days before the release of ChatGPT (Ministry of Industry and Information Technology, 2021a).

To understand the regulatory scramble that resulted, it is important to understand that social stability is one of the primary goals of the governing Chinese Communist Party (CCP). “Long-term social stability” is lauded as one of the “miracles” of the CCP, with the other being economic development (Xinhua, 2021)—an explicit acknowledgment of the innovation-regulation balancing act at the core of the Chinese Good AI Society. Social stability is referenced throughout AI governance documents because AI is a disruptive technology (Hine & Floridi, 2022a). Generative AI is one of the most direct threats to the CCP’s control over the Internet, which is why the algorithm regulations, the deep synthesis regulations, and the generative AI regulations all require the labeling of generated content. The algorithm regulations and deep synthesis regulations also prohibit the generation and dissemination of “fake news and information” (虚假新闻信息, xujia xinwen xinxi) (Art. 13; Art. 6), while the generative AI measures prohibit “fake and harmful information” (虚假有害信息, xujia youhai xinxi) (Art. 4(1)).Footnote 5

Despite the clear focus on stability, there are signs that innovation is becoming a stronger motivator. The original draft generative AI regulations stipulated that “content created by generative AI shall be true and accurate, and measures are to be employed to prevent the generation of fake information” (Daum, 2023), but this is absent from the final version. This loosening acknowledges that guaranteeing the truth of probabilistically generated content is effectively impossible and implicitly concedes that generative systems cannot simply be banned. Indeed, the final version of the law says that the state shall place “equal emphasis on development and security” (China Law Translate, 2023; Cyberspace Administration of China, 2023c) (Art. 3), showing that the government is acknowledging that some disruption is inherent to staying competitive in AI and is trying to preserve both of its “miracles.” This is part of a pro-growth approach that includes limited enforcement of the laws on the books (Zhang, 2024), promoting development but leaving room to crack down if necessary.

With centralized government regulation has come more centralized projects and funding and a decrease in local focus. In the wake of 2017’s “New Generation AI Development Plan” (AIDP), which laid out the national government’s goals for China’s AI sector, at least 28 local governments issued development plans outlining how they would encourage AI development to support national goals (Hine & Floridi, 2022a). Many of these have expired without replacement, implying that they may have been lip service to party aims. Some areas have now created more specific plans, like Beijing’s three-year plan to create an “AI innovation center” (People’s Government of Beijing, 2023). Others, including Shenzhen and Shanghai, have passed regulations to promote development (Wu, 2022), but centralized development projects also play a role. Provinces are competing for national projects, such as the “National New Generation AI Public Computing Power Open Innovation Platform” [国家新一代人工智能公共算力开放创新平台, guojia xin yidai rengong zhineng gonggong suanli kaifang chuangxin pingtai] (Hubei Science and Technology Office, 2023). “Open innovation platforms” are part of the 2017 AIDP, which laid out China’s intention to become a world leader in AI by 2030 (State Council, 2017). Some, like the “Wisdom Net” [智慧网络, zhihui wangluo], are being built by industry actors guided by public bodies (in this case China Mobile and the Ministry of Science and Technology) (C114 Tongxinwang, 2022), while others, like the public computing platform, are delegated to provincial labs, still under centralized guidance. While these factors might indicate an end to the “fragmented authoritarianism” era, they more likely represent a readjustment after the initial hype, as there is still legislative action on the local level, although strong centralized regulation might undermine local governance (Zhang, 2024). The primary change from before is that smaller provinces that seemed unlikely to become AI development powerhouses are dropping out, and development is concentrating in established technology hubs.

This is reinforced by national AI funding allocation. Akin to the US NSF, the National Natural Science Foundation of China (NSFC) funds projects of various sizes and releases annual funding statistics for different scientific divisions. According to the English versions of the NSFC Project Guides, between 38 and 45 AI projects were funded each year between 2018 and 2020Footnote 6 for a total of between 17.24 and 22.50 million RMB ($2.37-$3.01 million) (National Natural Science Foundation of China, 2019b, 2020b). However, the Chinese versions of the project guides reveal discrepancies. The numbers attributed to “systems science and system engineering,” which appear above “artificial intelligence and intelligent systems” in the English tables, are what was assigned to “artificial intelligence” [人工智能, rengong zhineng] in the Chinese versions. What was claimed for “artificial intelligence and intelligent systems” in the English versions went to “education information science and technology” [教育信息科学与技术, jiaoyu xinxi kexue yu jishu] in the Chinese versions, which is broken out as a category in the English version only from 2022 (National Natural Science Foundation of China, 2022c). The numbers do match up in the 2018 report (which provides data for 2016 and 2017) (National Natural Science Foundation of China, 2018b). Based on their magnitude, it appears that the data provided in the Chinese versions is correct. Those numbers are a factor of ten larger and paint a dramatically different picture, with the number of projects funded (in the hundreds, not dozens) continually increasing from 2016 through 2023, with a particularly steep increase between 2017 and 2018, reflecting new incentives after China’s first wave of AI governance began in 2017 (Hine & Floridi, 2022a). However, as shown in Table 1, funding peaked in 2021 at 157.1 million RMB ($21.57 million) and has since fallen below 2019 levels (146.04 million RMB ($20.29 million) in 2023 versus 153.32 million RMB ($21.30 million) in 2019) (National Natural Science Foundation of China, 2018a, 2019a, 2020a, 2021a, 2022a). Thus, provinces are competing for at best static and at worst dwindling resources, potentially a result of US chip sanctions (Strumpf & Leong, 2023). However, as China initiates digital sovereignty measures to boost domestic chip production, this may change (Roberts et al., 2023; Zhu, 2022).

Table 1 NSFC AI project funding for general projects [面上项目, mianshang xiangmu], 2018–2023

While funding is still heavily concentrated in wealthy areas, since 2020, when numbers for the “Fund for Less Developed Regions”Footnote 7 began to be broken out, both the number of projects and the funding amount for these regions have slightly increased: from 49 projects for 17.6 million RMB ($2.42 million) to 61 projects and 19.5 million RMB ($2.71 million) (as seen in Table 2). Together with decreased acceptance rates, this indicates an increase in project applications, mirroring the trend in Table 1. Still, funding is much less than in wealthy areas on both an absolute and a per-project basis.

Table 2 NSFC less developed region project funding in AI, 2020–2022
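The magnitudes above are easier to follow with the arithmetic made explicit. Below is a minimal Python sketch that reproduces the conversions and per-project calculations behind Tables 1 and 2, using only the figures quoted in the text; the single exchange rate is an illustrative assumption, since the dollar figures in the text reflect year-specific rates of roughly 7.2–7.3 RMB per USD.

```python
# Minimal sketch of the NSFC funding arithmetic discussed above.
# All funding figures come from the prose; the exchange rate is an
# assumption (the article's dollar conversions use year-specific rates).

RMB_PER_USD = 7.2  # assumed approximate exchange rate


def to_usd(millions_rmb: float) -> float:
    """Convert millions of RMB to millions of USD at the assumed rate."""
    return millions_rmb / RMB_PER_USD


# General AI projects (Table 1): funding peaked in 2021 and by 2023
# had fallen below the 2019 level.
general = {2019: 153.32, 2021: 157.10, 2023: 146.04}  # millions of RMB
for year, total in sorted(general.items()):
    print(f"{year}: {total:.2f}M RMB (~${to_usd(total):.2f}M)")
print(f"2023 vs. 2019: {general[2023] - general[2019]:+.2f}M RMB")

# Less developed regions (Table 2, endpoint years per its 2020-2022 range):
# more projects and slightly more total funding, but less per project.
for year, projects, total_rmb in [(2020, 49, 17.6), (2022, 61, 19.5)]:
    per_project = total_rmb / projects  # millions of RMB per project
    print(f"{year}: {projects} projects, {total_rmb:.1f}M RMB total, "
          f"~{per_project * 1000:.0f}k RMB per project")
```

Running this confirms the comparison drawn in the text: total funding for less developed regions rose while per-project funding fell (from roughly 359,000 to 320,000 RMB per project), well below the per-project levels implied by Table 1 for wealthy areas.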

Many of the “Less Developed Regions” lack AI plans and have not had significant developments, but Hainan has established a Big Data Administration (Hainan Big Data Industry Alliance, 2023), Guizhou is encouraging cities to apply for Ministry of Science and Technology application scenario funding (Guizhou Science and Technology Department, 2022), and Jiangxi issued a 2023–2025 Big Data action plan (Nanchang County People’s Government, 2023). However, they seem to have been left out of the central government’s major development initiatives. Of the nine cities selected to build the “National New Generation AI Public Computing Power Open Innovation Platform,” identified through press releases, none are in a “Less Developed Region.” These areas may not have the development infrastructure for such ambitious projects, but are an under-tapped resource that may provide development headroom in the future. Still, development in China’s Good AI Society is both top-down and bottom-up, albeit led by established technology hubs.

China is also trying to become a world leader in AI ethics, and its ethical principles are shifting to facilitate that. Guiding documents from 2019 were the Beijing AI Principles [人工智能北京共识, rengong zhineng Beijing gongshi], created by the Beijing Academy of Artificial Intelligence (BAAI) coalition, and the New GenerationFootnote 8 Artificial Intelligence Governance Principles: Developing Responsible Artificial Intelligence [新一代人工智能治理原则——发展负责任的人工智能, xin yidai rengong zhineng zhili yuanze—fazhan fuzeren de rengong zhineng] (Governance Principles), created by a Ministry of Science and Technology expert group. In September of 2021, that same group released the Ethical Norms for New Generation Artificial Intelligence [新一代人工智能伦理规范, xin yidai rengong zhineng lunli guifan] (Ethical Norms). The Governance Principles lay out eight principles for responsible AI (Ministry of Science and Technology, 2019; Webster & Laskai, 2019):

1. Harmony and friendliness [和谐友好, hexie youhao]

2. Fairness and justice [公平公正, gongping gongzheng]

3. Tolerance/inclusion and sharing [包容共享, baorong gongxiang]

4. Respect privacy [尊重隐私, zunzhong yinsi]

5. Safe and controllable [安全可控, anquan kekong]

6. Shared responsibilities [共担责任, gong dan zeren]

7. Openness and collaboration [开放协作, kaifang xiezuo]

8. Agile governance [敏捷治理, minjie zhili]

The Ethical Norms include six ethical requirements and 18 specifications across management, R&D, supply, and use. The ethical requirements are (Ministry of Science and Technology, 2021; Murphy & Group, 2021):

1. Enhance human well-being [增进人类福祉, zengjin renlei fuzhi]

2. Promote fairness and justice [促进公平公正, cujin gongping gongzheng]

3. Protect privacy and safety [保护隐私安全, baohu yinsi anquan]

4. Ensure controllability and trustworthiness [确保可控可信, quebao kekong kexin]

5. Strengthen responsibility [强化责任担当, qianghua zeren dandang]

6. Improve ethical literacy [提升伦理素养, tisheng lunli suyang]

Except for “harmony and friendliness,” all of the Governance Principles are represented in items in the Ethical Norms document. Most can be traced to the six ethical requirements, but several fall under some of the more granular specifications. In particular, “promote agile governance” [推动敏捷治理, tuidong minjie zhili] is one of the management norms. Another, “promote tolerance and openness” [促进包容开放, cujin baorong kaifang], reflects the sentiment of the Governance Principles items “tolerance and sharing” and “openness and collaboration” by encouraging cross-border cooperation and developing AI to solve economic and social development problems. Although “harmony” is referenced in Article 1 and as a component of “enhance human well-being” in the context of human-computer harmony and friendliness [人机和谐友好, ren ji hexie youhao], the top-level substitution of “enhance human well-being” for “harmony and friendliness” is notable. “Harmony” [和谐, hexie] is laden with Confucian connotations: it is a balancing process emphasizing internal stability and adjusting to one’s environment, yet Hine and Floridi (2022a) noted that the CCP nonetheless has “an outward-facing vision to achieve global leadership in AI.” Earlier principle-sets reference AI for the “overall interests of mankind” or to “serve the progress of human civilization,” but the “human values” that should guide these are never defined (Beijing Academy of Artificial Intelligence, 2019; Ministry of Science and Technology, 2019). Dropping “harmony” in favor of a more generic (and thus more universally endorsable) principle may serve Beijing’s goal of becoming a global leader in AI ethics and governance.

The Ethical Norms are highly similar to the OECD AI Principles, which is notable because China has been critical of “universal” values as Western constructs. This is especially prevalent in the context of the Universal Declaration of Human Rights (UDHR). While it no longer calls them “a hypocritical farce” (Shirk, 1977), China has in recent years tried to re-shape the role of the UN Human Rights Council to undermine its functionality, submitting a proposal referring to “so-called universal human rights” (Richardson, 2020). However, China was one of the main drafters of the UDHR (Will, 2008; Zhao, 2015), and now its Ethical Norms are trending closer to “so-called universal” values. Unfortunately, as in the US Blueprint and RMF (which also bear similarities to the OECD principles), sustainability is not a top-level line item. The Ethical Norms make three references to “sustainable development” (可持续发展, kechixu fazhan) (Ministry of Science and Technology, 2021), whereas both the Beijing AI Principles and the Governance Principles foreground it in their introductions (the Governance Principles less so than the Beijing AI Principles), showing a concerning decrease in the prioritization of sustainable development.

The other ethics-related document to come out of China is the 2022 Position Paper of the People’s Republic of China on Strengthening Ethical Governance of Artificial Intelligence (Position Paper), which provides advice on AI regulation, R&D, use, and cooperation to the international community to advance international AI ethical governance (Ministry of Foreign Affairs of the People’s Republic of China, 2022b). It promotes “people-centered” AI [以人为本, yiren weiben]Footnote 9 and “AI for good” [智能向善, zhineng xiangshan] (Ministry of Foreign Affairs of the People’s Republic of China, 2022a, 2022b). Finally, it calls for:

the international community to reach international agreement on the issue of AI ethics on the basis of wide participation, and work to formulate widely accepted international AI governance framework, standards and norms while fully respecting the principles and practices of different countries’ AI governance (Ministry of Foreign Affairs of the People’s Republic of China, 2022a).

In concert with China’s new international-facing AI principles—and the fact that, of all the documents, only the Position Paper has an officially released English translation—we can see that China is trying to present its leadership drive as beneficent while leaving room for its specific uses (and abuses) of AI. Its Good AI Society, like the US’s, is thus not purely domestic, but it sees China taking a more paternalistic role as compared to the US’s coalition-based approach. The US is transparent about wanting to promote its AI values worldwide; it would be unusual for China to fully abandon its distinct sense of AI values, and certainly for China to interpret more “generic” values in a way that would accord with American notions of human rights. Still, the call for ethics pluralism in the Position Paper (which Chinese leaders expressed as early as 2011; Zhao, 2015) may be an olive branch to the US-EU “democratic values” coalition and a rhetorical contrast to their hardline stance. It remains to be seen whether this call will be heeded.

4 Discussion

In this section, I will contextualize each country’s Good AI Society vision using a philosophy-of-technology level of abstraction drawn from Hine and Floridi (2022a). Both countries have complex histories with many influencing ideas and philosophies;Footnote 10 neither is a monolithic embodiment of a single philosophy, and I do not intend to essentialize them by adopting this level of abstraction (Floridi, 2008). Rather, looking at two highly influential philosophies, one from each country, which inherently inform each country’s philosophy of technology, offers a framework for a high-level analysis of each country’s approach. Although this level of abstraction obscures some nuances, it is valuable for projecting future paths and how they can be changed. The Protestant Ethic is often characterized as more individualistic and Confucianism as more collectivistic,Footnote 11 but both want to benefit “the people” (Hine & Floridi, 2022a). Different definitions of “the people” hinder productive cooperation, but I argue that both philosophies can support values-pluralist global governance, before contextualizing this argument in the broader foreign policy picture.

The US’s Good AI Society is partially underpinned by the Protestant Ethic. Protestantism, as interpreted by Calvin, preached predestination (that one’s postmortem fate was predetermined), so material success was a way to show that one was surely saved (Weber, 2001). It thus emphasized the value of individual hard work. During the Industrial Revolution in the US, it paired with the technological sublime to influence the US’s drive to demonstrate its technological superiority and fostered a sense of American exceptionalism (Nye, 1994; Weber, 2001), which persists today. Though now secularized, the Protestant Ethic manifests in the US’s historic laissez-faire approach to AI regulation, which relies on industry self-regulation and nonbinding initiatives so as not to hamper innovation (Hine & Floridi, 2022a). Although concrete legislation is under development, it will be heavily influenced by industry, as shown by the administration leaning on industry players in panels and self-regulation.

The American worker is a crucial element in the creation of a Protestant Ethic-based Good AI Society, but is also vulnerable. The Trump administration acknowledged the American worker as “a vital national asset,” but was mostly concerned with ensuring that workers had the skills necessary to develop AI and to use it to make their work more efficient (“Artificial Intelligence for the American People,” n.d.). With its fundamental rights-influenced approach, the Biden administration is concerned not just with ensuring Americans have the skills to develop and use AI, but that they be protected from workplace abuses and replacement (Biden, 2023; White House Office of Science and Technology Policy, 2022). The Biden administration is treading a fine line between celebrating the tech workers who will create potentially disruptive technologies and acknowledging that other workers will need protection from those same technologies.

American exceptionalism includes not just American innovation, but “American values” as well. In the Trump era, US allies were expected to subsume their own goals to those of the US (Hine & Floridi, 2022a), but the Biden administration is putting allies on a more equal footing by promoting the superiority of “democratic” rather than “American” values. These are the foundation of a new values-aligned bloc that, through initiatives like the TTC, is creating shared definitions and consolidating discussion around “trustworthy” AI. The US’s Good AI Society is generally outward-facing, reflecting the Protestant Ethic’s drive for “world mastery” (Hine & Floridi, 2022a) that encouraged expansionist and imperialist foreign policy (Münch, 2001, 237). This attitude is also seen in digitally expansionist measures aimed at hobbling China’s semiconductor industry and in the insistence that AI must follow superior “democratic values.” However, the Biden administration is also taking an introspective gaze through initiatives like the Blueprint, seriously considering what a US Good AI Society should look like domestically, not just in relation to allies and competitors.

This turning inward could be significant for domestic regulation, and the US’s recruitment of allies could result in some international AI regulation, but its exclusionary definition of “the people” as those who share “democratic values” will never permit truly international governance. The influence of the Protestant Ethic, to say nothing of the US’s current foreign policy, both of which emphasize American exceptionalism, may preclude the acknowledgment of another values-set as equally valid internationally. However, a values-pluralist version of the Protestant Ethic that returns to its roots of celebrating hard work and innovation for the sake of the people—regardless of what ethical system they subscribe to—could create a more inclusive foundation for international global governance. Under the Trump administration, the US’s definition of “the people” was highly US-centric, so incorporating allies into its Good AI Society already represents a broadening, although including non-democratic countries would be a much larger extension.

China’s Good AI Society is influenced by Confucianism, but it—and the country as a whole—is not a monolithic embodiment of Confucian thinking. Still, Confucianism, which has been referenced in foreign policy for decades (Niquet, 2011), impacts governance. In antiquity, Confucianism offered advantages in flexibly governing a massive country populated by groups with different cultures and customs (Hine & Floridi, 2022a; Goldin, 2015; Hsiung, 2011). It emphasizes hierarchical relationships and following the dao as a process of harmonization (Wong, 2012; He, 2015). Now, it manifests in AI policy through an internal focus and prioritization of social stability (Hine & Floridi, 2022a). However, this is evolving, as demonstrated by the Generative AI Regulation’s insistence that development is now on an equal footing with security (Cyberspace Administration of China, 2023b) (Art. 3). The AI regulations give the government wide latitude through national security carveouts while protecting people from corporate abuse; corporations face an expectation of cooperation rather than wooing by the government. In the Confucian Good AI Society, innovation is harnessed by the government, which issues centralized guidance and funding, while in the Protestant Ethic Good AI Society, it is unleashed with minimal (though increasing) interference. The other major evolution is in China’s ethics principles, which are losing their distinctiveness (including the explicitly Confucian emphasis on “harmony”) and aligning more with international principles. This may be a deliberate strategy so that the Position Paper can paint China as an experienced leader rather than a strongman, aiming for a kind of global influence different from the Protestant Ethic’s drive for world mastery (Nye, 1994; Weber, 2001). It may also facilitate establishing a thin normative foundation of shared values. However, this contrasts with other aspects of China’s foreign policy, which has explicitly invoked Confucian notions of harmony and being a “good neighbor” and used Confucius Institutes to portray itself as wise and beneficent as it developed (Niquet, 2011).

On a high level, Confucianism’s emphasis on harmony and righteousness is more inherently pluralist than the Protestant Ethic. A righteous ruler is one who rules by moral authority rather than force, discouraging domestic conflict. Externally, Confucianism’s emphasis on being a “good neighbor” has been used to portray China as fundamentally peaceable (Cao, 2007; Zhao, 2018), and the idea that one of superior character pursues harmony but not uniformity (和而不同, he er bu tong) also appears in Chinese foreign policy (Cao, 2007). Indeed, so long as countries embody their appropriate roles, international conflict is not legitimate (Shih, 2022), meaning international values pluralism is possible under Confucianism. Using AI under a different values system would not cross the threshold of intolerability, as acknowledged in the Position Paper, which calls for ethical pluralism and tolerance. Thus, the co-existence of two Good AI Societies under different value-sets is permissible from the Confucian perspective and is reflected in the calls for ethical pluralism emanating from China.

Because Confucianism emphasizes hierarchy and preserving stability, it can lend itself to a definition of “the people” as only those who do not threaten social stability, justifying measures that violate human rights, like the oppression of the Uighurs (Human Rights Watch, 2021). This in turn gives the US grounds to refuse cooperation. A global evolution of the Confucian Good AI Society, motivated by China’s stated desire to be a world leader and work with other countries, could discourage these human rights-violating actions. The best way to achieve harmony with disruptive technology is to govern it effectively, and AI will require global governance. The US will not cooperate with states engaged in persistent human rights violations, which threatens governance efforts and thus stability; so, it would benefit the broad mission of harmony to cease these violations. This will also require compromise from the US, which must accept working with countries that do not share its exact values.

However, it must be acknowledged that philosophy and rhetoric are not always reality. Despite China’s use of Confucian rhetoric to portray itself as pacifistic (Cao, 2007), its saber-rattling around Taiwan calls that into question. China has historically portrayed itself as an international victim and peace-seeker (Cao, 2007; Zhao, 2018), and the Position Paper does this as well. China may be self-representing as well-intentioned and beneficent in AI, but whether it is walking the walk depends on whether it is willing to engage with other countries and roll back its human rights-violating AI systems. However, the onus is on the US and other countries to engage with China’s ostensible olive branch. This could be criticized as appeasement, which has been denounced for failing to avert negative outcomes (Richardson, 1988), but appeasement requires concessions; engagement is not appeasement. The US should come to the table with China, as it was willing to do at the UK AI Safety Summit, and determine if constructive engagement is possible.

5 Conclusion

Both the US’s and China’s visions of a Good (domestic) AI Society have evolved over the past two years. The US sees a society where Big Tech companies drive innovation but are prevented from running roughshod over fundamental rights, although they will have significant roles in crafting regulations. It also anticipates working with values-aligned countries to strengthen global AI governance (Roberts et al., 2021; Hine & Floridi, 2022a). The fundamentals of “democratic values” may be extended into the development process if the NAIRR is implemented, but the US’s largest issue is that many of its proposals are unenforceable, and tangible legislation is not guaranteed to pass in a heavily partisan Congress. Anti-China sentiment is a unifying force for Democrats and Republicans, but while it might facilitate domestic AI governance, it is a major obstacle to global governance efforts.

China is increasingly prioritizing innovation and making concessions to industry in its regulations, which are largely, but not totally, driven by the central government. Although China’s rhetoric is less explicitly competitive, its intention to become a global leader in AI ethics and governance will be perceived as threatening by the US. China does profess a desire for collaborative international governance, but whether this extends to accepting a multipolar AI governance situation remains to be seen, and foreign policy precedent suggests that it may be disingenuous. Still, neither country wants conflict, and both are trying to create a Good AI Society for “the people.” However, if “the people” are to be defined as those who share a particular value-set, that will preclude cooperation and should be expanded.

Philosophically, the Chinese and American approaches are not incompatible with global AI governance. However, philosophy does not translate seamlessly to reality, and many other geopolitical and economic factors impact the situation. This article is not so naïve as to suggest that both countries will drop their myriad conflicts and join hands to promote global AI governance, but there is hope for productive action. Given that both the US’s and China’s Good AI Societies emphasize international governance, reflecting on underlying philosophies of technology may help establish a base for cooperation on global governance. China, rhetorically and philosophically, seems willing to entertain global, value-pluralistic AI governance. It is the US, with its hard line around the superiority of “democratic values,” that will need convincing. There was a glimmer of hope at the UK AI Safety Summit in November of 2023, where the US and China both signed the Bletchley Declaration, which called for “international cooperation” for safe and “trustworthy” AI development, including “the protection of human rights,” although it notes that “national circumstances” may affect how AI risks are classified (“The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023”, 2023). It may continue to fall on third parties to bring the US and China to the same table. A thin normative foundation of individual and collective rights, perhaps based on the human rights treaties that are ostensibly internationally supported (and mentioned in both US and Chinese AI documents), could help ground these talks (Prabhakaran et al., 2022). Additionally, Track II dialogue, which has been ongoing in AI and national security (Hass & Kahl, 2024), could help lay the groundwork for higher-level discussions. Realistically, global governance will likely involve strengthening the existing weak global regime complex rather than establishing a new global body or treaty (Roberts et al., 2024). Still, a suitable international venue, such as the UN or OECD, could bring the US, China, and other actors to the same table and establish a concrete foundation for a Good Global AI Society.