In October of 2022, the White House Office of Science and Technology Policy released the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (the “Blueprint”) after a yearlong public engagement process. When the process began in 2021, the stated goal was for “a bill of rights for an AI-powered world” (Lander, 2022), but the finished product, described as a “white paper” and a “framework” (White House Office of Science and Technology Policy, 2022a), is a much less ambitious, non-binding guide to the office’s vision for AI development. Although the principles it lays out are not yet enforceable, it is the clearest look yet at the Biden administration’s definition of a “Good AI Society” (Cath et al., 2018). In this Comment, we analyze that vision and compare it to those of the two most prominent players on the global AI stage, the European Union (EU) and China. We argue that the Blueprint is a unique and promising, if flawed, advance in US AI governance, but that it requires concrete legislation to be effective in the private sector.

1 An Unenforceable but Promising Start

As a non-binding white paper, the Blueprint outlines principles and ideas for putting them into practice, rather than concrete requirements and actions. Like the EU’s Artificial Intelligence Act (AIA), it takes a product-oriented approach that treats AI as a system to be regulated, not as a source of outputs. However, instead of attempting to define AI, as the EU is struggling to do in the AIA, the Blueprint applies to “automated systems” that could “meaningfully impact” the American public’s civil rights, civil liberties, and privacy; access to equal opportunities; or “access to critical resources or services” (White House Office of Science and Technology Policy, 2022a). This is a step in the right direction, that is, towards regulating the uses of AI rather than AI itself. However, the Blueprint is insufficiently bold in its approach to implementation, and the appendix definition of an “automated system” is too inclusive: “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities” (White House Office of Science and Technology Policy, 2022a). This could describe almost any system with which a user interacts, as computation is inseparable from software and hardware. However, the requirement that the impact be “meaningful” would limit the systems under consideration to AI systems in critical applications, provided the “meaningful” specification is itself meaningful. This resembles some of the modifications the Czech Presidency recently suggested to the AIA (Bertuzzi, 2022a).

The Blueprint outlines five principles:

  • Safe and effective systems.

  • Protection from algorithmic discrimination and inequitable systems.

  • Protection from abusive data practices and agency about how personal data is used.

  • Knowledge of when an automated system is used and understanding of its impacts.

  • The ability to opt out of automated systems for a human alternative.

For each principle, the Blueprint explains why it matters, what should be expected of automated systems that conform to it, and, through broadly described examples, how it can be put into practice. It contains no enforcement mechanisms and thus has been criticized as “toothless” (Johnson, 2022). Still, as a strategic document, it may have a federal impact even without legislative efforts to codify its ambitions.

2 An Evolution of the US Good AI Society

Each administration since Obama’s has suggested a new idea of a Good AI Society. Roberts et al. (2022) evaluate AI governance approaches by looking at how governments enact their goals and who their policies are intended to benefit. This framework helps assess the Blueprint and compare its vision to others. In terms of who is to benefit, the vision laid out in the Blueprint focuses on empowering communities through protecting individual rights. Noting that existing frameworks offer inadequate protections when impacts manifest at the community level, the Blueprint provides a broad definition of community that includes neighborhoods, on- and offline social connections, formal organizational ties, and affinity- or identity-based connections (White House Office of Science and Technology Policy, 2022a). It states that harms perpetuated by automated systems must be assessed and redressed at both the individual and community level. This represents a new avenue in the US AI vision. Furthermore, by writing in the second person, the Blueprint deliberately addresses readers as members of the American public rather than as users or consumers. In its focus on equity, countering discrimination, and listening to public input, the Blueprint incorporates aspects of the Obama era of AI regulation (Hine & Floridi, 2022), but its explicit focus on community-level equity is new and unique internationally. Unfortunately, as in the AIA, there is no mention of the environment, as if AI had no impact on it and that impact had no significance for society (Cowls et al., 2021). A broad interpretation of community impact could include climate change, but the latter does not seem to be a priority in the Blueprint.

The Obama administration issued several AI-related documents in its final year, outlining a diversity-focused approach aiming to develop AI “by and for” diverse populations (Executive Office of the President, 2016). The Blueprint focuses less on increasing the diversity of the development pipeline and more on ensuring that different communities’ voices are heard throughout the development process, mirroring the year of public input that shaped the Blueprint. This reflects a focus on equity that is not predicated on the slow reshaping of the software industry. The Obama era also began emphasizing a faith in American innovation that continued into the Trump administration, which centered its nebulous Good AI Society vision on promoting ill-defined “American values” domestically and abroad for the benefit of the US and its allies, but not its main competitor, China (Hine & Floridi, 2022). At the beginning of Biden’s term, his administration’s approach was more in line with the ethics-centered rhetoric of the Obama era, but it retained aspects of the Trump era’s emphasis on trustworthy AI – also referenced in the Blueprint, though perhaps rendered irrelevant by its new principles – and competition with China (Hine & Floridi, 2022). The Blueprint, however, is focused on the American people, rather than on US enemies and allies.

Regarding how the vision will be enacted, federal departments announced action across labor, consumer protection, education, healthcare, and housing (White House Office of Science and Technology Policy, 2022b). Some of these, such as a collaboration between the Department of Labor and the Equal Employment Opportunity Commission (EEOC) on reimagining hiring and recruiting practices, including the use of automated systems, or a crackdown on algorithmic discrimination in the financial system, could have tangible impact. However, as of early 2023, reports by the Department of Health and Human Services on health equity and by the Department of Energy on responsible AI, both due in 2022, have yet to materialize, raising questions about follow-through. Furthermore, private sector action is unlikely without further legislative action. The Blueprint focuses mainly on rights, with less attention to duties or to promoting new AI development, but an implicit faith in “the power of American innovation” is woven throughout (White House Office of Science and Technology Policy, 2022a). Unlike in the Trump administration, however, this faith is tempered with caution. The Blueprint notes that automated systems can drive innovation but also have the potential to embed discrimination (White House Office of Science and Technology Policy, 2022a). Thus, the Blueprint can be seen as a set of guardrails preventing the unfettered power of American innovation from causing negative externalities, rather than as a motor spurring new developments.

The Blueprint itself does not address international cooperation or competition, although soon after its release, new sanctions targeting the Chinese semiconductor industry were announced (Brands, 2022), continuing an escalation that spanned the Trump administration and the start of the Biden administration (Hine & Floridi, 2022). It is internally focused and more concerned with encouraging the development of domestic, industry-specific standards than with influencing international standards. The most significant external effort – under the header “Leading by example and advancing democratic values” in the fact sheet accompanying its release – is the launch of an AI Action Plan for the United States Agency for International Development, a development assistance organization. The document itself acknowledges that this is meant to “sustain American global leadership” rather than advance it (White House Office of Science and Technology Policy, 2022b). This domestic focus is perhaps appropriate for a document that brands itself as a “Bill of Rights,” as the original Bill of Rights was written to guarantee the rights of the American people. Regardless, this turning inwards is a notable development.

3 Comparing to AI Peers: Promise and Concerns

A fruitful way to understand the nature and implications of the community-oriented Blueprint is to compare it to the Good AI Society visions of the two other major AI powerhouses, the EU and China. The EU’s Good AI Society takes an “explicitly individual-focused,” human-centric approach, which involves promoting development in the public and private sectors and coordinating across Member States (Roberts et al., 2022). China’s approach is highly innovation-focused, with initiatives to promote specific developments at all levels of government and industry, and societally oriented, prioritizing social stability over the rights of the individual (Hine & Floridi, 2022; Roberts et al., 2022).

Since the Blueprint is not a binding document, it will not independently enact the administration’s goals. Throughout, it acknowledges the need for more concrete legislation and hints at how its vision might be brought to life. It calls for a comprehensive data privacy law rather than the “patchwork” that exists today, acknowledging data privacy as “fundamental” to enabling the other principles (White House Office of Science and Technology Policy, 2022a). Many of the ideas it proposes – like meaningful consent to data collection and the right to correct erroneous personal data – are similar to clauses in the EU’s 2018 General Data Protection Regulation (GDPR), or two of its relatives, China’s 2021 Personal Information Protection Law (PIPL) (Swabey, 2021) and the California Consumer Privacy Act (CCPA) (Keane, 2021). One stated aim of the Blueprint is that the public should be protected from “unchecked surveillance,” but it has been criticized for normalizing surveillance in a manner more akin to China than the EU (Fox Cahn, 2022). The EU’s AIA may ban both real-time and ex-post biometric identification via surveillance devices, except to identify victims or perpetrators of crime or to prevent an imminent threat (Bertuzzi, 2022b). China’s PIPL, on the other hand, permits mass surveillance in the name of national security (Roberts et al., 2022). The Blueprint states that “Surveillance should be avoided unless it is strictly necessary to achieve a legitimate purpose and it is proportionate to the need” and that surveillance must not limit civil liberties or civil rights (White House Office of Science and Technology Policy, 2022a). However, the broad language leaves the door open to surveillance, including real-time surveillance in public areas, with a potential negative impact on segments of the population already subject to heightened nontechnical surveillance. 
Furthermore, while the AIA would ban AI applications deemed unacceptably risky and put increased requirements on high-risk systems, the Blueprint does not forbid any applications (Eliot, 2022). It only states that systems that violate its principles should not be developed. Thus, although this new Good AI Society is meant to support individuals and communities, the latitude in its principles may mean that mass surveillance and unethical systems could increase.

One promising effort is in algorithmic explanation. The Blueprint states that users should be provided “generally accessible plain language documentation” outlining the system, the role of automation, and an explanation of how an outcome was reached. The GDPR ostensibly enshrines a right to “meaningful information” about the logic behind a decision, but the ambiguity of its provisions raises substantial questions about the scope of this “right to explanation” and its feasibility in practice (Kaminski, 2019; Wachter et al., 2017). China’s “Provisions on the Management of Algorithmic Recommendations in Internet Information Services” requires an explanation only when “rights and interests” are majorly impacted (China Law Translate, 2022). The Blueprint’s general specifications could balance technical feasibility with information usability and entitle individuals to truly meaningful explanations of algorithmic decisions.

However, some of its more specific provisions could be technically infeasible. On top of its general explanation requirements, the Blueprint specifies that only “fully transparent,” explainable models should be used in areas where “extensive oversight is expected,” such as criminal justice. It is unclear what “fully transparent” means, and many algorithms that rely on machine learning are inherently inscrutable. China’s growing use of AI in courts has raised concerns around the explainability of decisions and appropriateness of training data (Stern et al., 2021). The AIA considers AI in the criminal justice system to be high-risk and thus places it under the same data quality requirements as other high-risk applications. However, it does not impose additional explainability requirements beyond what the GDPR vaguely contains – users merely have to be able to “interpret the system output and use it appropriately” (Permanent Representatives Committee, 2022). The three different standards could lead to substantially different uses of AI in criminal justice that could differentially threaten individual rights. However, the US’s approach risks precluding any possible benefits.

A major component of national AI governance strategies is promoting innovative developments in AI. The Blueprint does not outline explicit measures to promote innovation. It does not take a “wish list” approach to developments, as China does (Hine & Floridi, 2022). Nor does it encourage the formation of regulatory sandboxes or increase SME support, as the AIA does. Indeed, the accompanying fact sheet’s entrepreneur support measures are a forthcoming risk management framework and a continuation of existing National Science Foundation funding, leaving the innovation landscape essentially unchanged. It is also unclear whether any auditing-like approach, such as that present in the AIA, could be created based on the Blueprint.

4 Unequal Impacts on the Public and Private Sectors

The Blueprint is already being promoted across the federal government. These wide-ranging efforts could have significant impacts on many aspects of American life. Some industry actors could adopt measures from the Blueprint without further incentive, and the frameworks being developed by federal agencies could trickle down to industry. However, the Blueprint is unlikely to impact the private sector unless, as it suggests, additional legislation is advanced (Johnson, 2022). As a “Bill of Rights,” it could be much more ambitious about spurring action. A bill of rights is meant to guarantee a set of rights to specific people (Merriam-Webster, n.d.); it must, by definition, be enforceable. The original Bill of Rights did not begin as a “blueprint,” but as a way to define and guarantee Constitutional freedoms at the dawn of a new era in America. For the Blueprint to do the same in the era of AI, it must have teeth. More of the administration’s vision may become a reality if the Blueprint spurs action on the stalled Algorithmic Accountability Act of 2022 or the American Data Privacy and Protection Act. Another avenue for impact would be if it inspires state legislation on the level of the CCPA or state-level biometric privacy laws, such as the Illinois Biometric Information Privacy Act, under which Facebook paid $550 million to settle a class-action lawsuit regarding its use of facial recognition without notification (Pester, 2020). The Blueprint represents a promising (if significantly improvable) development of the American Good AI Society that emphasizes the protection of communities as well as individuals. However, without concrete legislation to back it, it risks remaining a mere invitation.