1 Introduction

In the current frenzy of concern around artificial intelligence, there are numerous fault lines and competing camps: “AI doomers,” [1] “techno-optimists” [2] and tech ethicists disagree on which AI harms should take precedence, the likelihood that AI could become sentient, and the appropriate regulatory strategies for overseeing risks associated with AI. With billions of dollars at stake and potentially significant geopolitical implications, there is considerable upside for those able to frame these AI debates using their preferred terminology. Changes in language often contribute to AI hype by shifting the common parlance from a purely descriptive term to one with connotations suggesting that AI is more capable, powerful, or human-like than is warranted.

2 Reshaping the linguistic terrain of AI

Several efforts to shift the linguistic terrain of AI have succeeded in recent years. In August 2021, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) coined the term “foundation model,” giving a new name and characterization to what until then had simply been known as “large” machine learning models. HAI defined a “foundation model” as a machine learning model that is “trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks.” [3] Whereas the term “large machine learning model” places the emphasis on the size of the model and the data required to power it, the term “foundation model” emphasizes the model’s potential function or capability—that is, the new term implies that these models will serve as the “foundation” for a wide range of tasks. While it was not actually the case at the time that so-called “foundation models” were being used for a “wide range of downstream tasks,”Footnote 1 the coining of the term was in effect an effort to bring that future about.

At the time Stanford was inventing the term “foundation models,” large language models were already receiving criticism for their tendency to “encode and reinforce hegemonic biases,” to be misperceived as understanding language and having communicative intent, to produce false and inaccurate outputs, and to leak personally identifiable information. [4] The very public fallout from that critique was cited by critics as a potential motivator for re-naming these models “foundation models.” [5] In a world that relies on search engine optimization for information retrieval, re-branding large machine learning models was a way to hide the history of their criticism. The term “foundation models” thus contributed to AI hype by simultaneously suggesting that large machine learning models were in much wider use than was in fact the case at the time and attempting, via a re-brand, to elide high-profile criticism leveled at those same models.

In a slightly different vein, the term “hallucination” [6] was popularized by Google [7] in the aftermath of ChatGPT’s sudden rush to worldwide popularity. The term “hallucination” is meant to describe the behavior of large language models when they output false (but often plausible-sounding) information in response to a user query. Critics pointed out that this language is anthropomorphizing. [8] The Cambridge Dictionary’s original definition of “hallucinate” was “to seem to see, hear, feel, or smell something that does not exist, usually because of a health condition or because you have taken a drug.” [9] As machine learning models a) do not see, hear, feel, or smell, and b) have no direct experience of what does or does not exist, “hallucination” is a misleading (and anthropomorphizing) term to use in the context of an AI tool; the word assigns to AI a human-likeness that it does not actually possess. Nevertheless, the term “hallucinate” has been broadly adopted. In a major win for AI hype, the Cambridge Dictionary named “hallucinate” its Word of the Year for 2023 [10] and updated the definition [9] to include the word’s use in AI-related contexts.

3 The birth of “frontier AI”

Now, researchers and policymakers with ties to the Effective Altruist movement have coined another new term designed to feed the AI hype cycle: “frontier AI.” In July 2023, researchers with ties to the Future of Humanity Institute (FHI) [11]—an organization famously focused on the “existential risk” that AI supposedly poses to humanity—published a non-peer-reviewed paper [12] on arXiv bestowing a new name on a particular conception of artificial intelligence: “frontier AI.” The paper’s authors define “frontier AI” as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.” The authors of the paper have close ties to the Effective Altruist (EA) movement,Footnote 2 which has promoted a focus on “existential risk” and whose adherents have taken (arguably quite successful) steps to influence government AI policymaking in that direction. [13]

In April 2023, tech investor and entrepreneur Ian Hogarth published a viral opinion piece in the Financial Times [14] speculating that large commercial AI models pose an “existential threat” to humanity and must be reined in. In June 2023, the UK announced that Hogarth would chair the government’s Foundation Model Taskforce. [15] The EA paper was published in July and was apparently presented to the UK prime minister’s office, with the PM’s special advisors on AI and the director and deputy director of the Department for Science, Innovation and Technology (DSIT) in attendance. [16] In September, the UK government both re-named the Foundation Model Taskforce as the “Frontier AI Taskforce” [17] and announced that it would hold an invitation-only “AI Safety” Summit focused specifically on “frontier AI.” The UK government is arguably in thrall to Effective Altruists and their obsession with “existential risk.” [16] In a matter of months, individuals with strong ties to the AI “doomer” camp were able to get their preferred framing both adopted and prioritized by one of the most powerful governments in the world.

But what is “frontier AI”? What is at stake for society when this new language is adopted to describe AI technologies?

4 Defining “frontier AI”

Frustratingly for our purposes, the definition of “frontier AI” is quite fuzzy. According to the UK government, “frontier AI” consists of “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.” [18] Already with this definition, the emphasis is on the speculative future rather than the actual present. “Frontier AI” will “match or exceed” what “today’s most advanced models” are capable of.Footnote 3 The term designates “highly capable foundation models that could possess dangerous capabilities.”Footnote 4 In other words, “frontier AI” is speculative—it does not even exist yet, but the implication is that it is right around the corner.

The idea of soon-to-be massively powerful AI technologies is a key indicator of the hype-inflated worldview inherent in the term “frontier AI.” The people popularizing this term are deeply concerned with what they call “existential risk” and “value alignment.” That is, they fear that a super-powerful AI could someday (perhaps quite soon) go rogue, leading to the destruction of humanity. As a result, they are invested in finding ways to align the “values” of AI with those of humanity. Often, the very same people making these claims are actually building the technologies they claim to so greatly fear.Footnote 5 [19] Critics have argued that stoking fears of “existential risk” is really just another form of AI hype: if it is powerful enough to kill us all, then it must be really powerful. Focusing on far-future dangers is a means of diverting attention from the regulation-free status quo, which works in favor of the small cadre of companies developing these AI models.

Not all AI experts [20] believe that “existential risk” is a reasonable fear, [21] but there are high-profile people in the field who do, and they have a lot of money to mobilize in service of promoting their views. In fact, the Effective Altruist movement is explicitly focused on earning as much money as possible (so that its adherents can use those funds to “do the most good”). One notable example of how the EA philosophy can fail in spectacular ways is former billionaire and convicted fraudster [22] Sam Bankman-Fried, a high-profile member of the Effective Altruist community.

As outlined above, the authors of the July paper introducing the term “frontier AI” have links to the Future of Humanity Institute at Oxford. FHI’s founding director is philosopher Nick Bostrom, who is well known for his interest in existential risk and other highly speculative potential futures with AI technologies. Bostrom is a leading figure in what has been dubbed the TESCREAL ideologies [23], a constellation of linked beliefs held by a small group of influential and controversial individuals who are attempting to steer the AI agenda toward the creation of artificial general intelligence (AGI). A significant portion of the concern around “existential risk” hangs on the fear that AGI will achieve self-consciousness and then turn its aims to the destruction of humanity.

Given its origins and the connotations within its definition, anytime someone adopts the term “frontier AI,” they are effectively endorsing this very specific set of beliefs about AI and its capabilities. At the same time, using the term gives credence to the idea that this is a technology that already exists. If we constantly talk about “frontier AI” as though it is a real thing, the fact that the term apparently refers to the potential existence of certain powerful or dangerous AI capabilities becomes an inconvenient afterthought.

5 Connotation vs denotation

The development of new vocabulary is often an exercise in trying to shift the discourse by using words that have not just a denotation (an explicit or direct meaning) but also a particular connotation (what the word implies or suggests). The words we choose to name things are often deeply connected to values and commitments that influence our perceptions of the thing that is named. “Pre-born baby” [24] and “embryo” or “fetus” nominally describe the same thing, but the former connotes a particular worldview and set of values, focused on opposition to abortion. People in the “pro-life” movementFootnote 6 are eager to get others to adopt this vocabulary and framing. Similarly, the TESCREALists behind “frontier AI” are hoping to get this language adopted to advance their own agenda.

So what might “frontier AI” connote? To start, the word “frontier” evokes the American “Wild West” and its colonial mentality: a place where “civilized” white men venture into the dangerous unknown and use lethal force to dominate and exploit what they find there, for profit. Others [25] have argued [26] that machine learning reproduces colonial logics, and the term “frontier AI” certainly invokes and reinscribes this colonial dynamic. We can also see the colonial dynamic between the handful of powerful Western companies that produce today’s generative AI models and the people of the “Global South” [27] who are most likely to experience harm as a direct result of the development and deployment of these AI technologies.

The UK’s AI Safety Summit agenda apparently argued that the frontier is “where the risks are most urgent” and “where the vast promise of the future economy lies” [28]—marking the frontier as a place of great opportunity, but also great danger. Thus, “frontier AI” becomes a verbal proxy for the “existential risk” that TESCREALists are constantly pushing, with the added implication that such powerful technology can also lead to large profits for those able to harness it.

A technological frontier also calls to mind the glittering expanse above us—space.Footnote 7 TESCREALists like billionaire and SpaceX founder Elon Musk are keen to lead the charge into the stars, around which they plan to build vast computer simulations in which astronomical numbers of digital people will live out happy lives. [29] According to the flavor of utilitarianism known as longtermism (the “L” of “TESCREAL”), due to their overwhelming numbers, the sum total happiness of these hypothetical space-based future digital people dwarfs the well-being of all actually living people on Earth; therefore, ensuring that these future digital souls come into existence is, on their view, a moral duty. The path to bringing those souls into being starts with the development of AGI.

The word “frontier” also implies a definite boundary that recedes in a linear fashion,Footnote 8 as technology makes steady progress into the future, wresting the known from the unknown. This, too, is misleading. Technology does not progress in a steady, linear march. Sometimes it limps along, as in the “AI winter,” [30] until something changes to renew its progress, and sometimes people stand up to refuse a particular technology, halting its “progress” for the sake of values they see as more important. [31]

Profit. Danger. Outer space. Progress. These are the connotations of “frontier AI.” Given these connotations, it should be obvious that “frontier AI” is an exercise in AI hype. Today’s AI tools have not yet demonstrated that they have a sustainable business model. [32] The main dangers that “frontier AI” boosters are concerned with are hypothetical “existential risks.” We are nowhere near colonizing outer space. And with generative AI threatening the livelihoods of creative workers such as writers, illustrators, and actors, it is clear that not everyone agrees that today’s AI tools represent progress.

6 Reasons to reject “frontier AI”

We need to push back against the term “frontier AI” before it becomes uncritically adopted by the press and others. It is not clear that the thing TESCREALists want to name with the words “frontier AI” even exists—but if it does, then “highly capable generative AI models,” while less pithy, is perfectly serviceable and much more matter-of-fact. (I think even “foundation models” would be preferable.)

As we have seen with terms like “foundation model” and “hallucination,” the press and others with an investment in staying up to date with technology are typically eager to adopt the latest terminology so as to appear in the know. (Special language is a way to distinguish insiders from outsiders, as anyone navigating the sea of acronyms at most universities can attest.)

Pushing out the term “frontier AI” is a way for its boosters to frame the conversation around what they think is important while simultaneously re-brandingFootnote 9 large-scale generative machine learning models to, once again, divorce them from prior criticism. Large-scale generative machine learning models have been shown to cause several types of actual, non-hypothetical harms, including psychological harms [33, 34], social harms [35, 36], and environmental harms [37, 38]. “Frontier AI” encourages us to look past those very real harms in favor of a focus on the hypothetical future.

Fortunately, the same researchers who have been leading the charge to uncover and highlight real AI harms and to develop robust AI governance mechanisms have already begun speaking out against efforts to shift the verbal terrain of AI. In a letter to the editor of the Financial Times responding to Hogarth’s “god-like AI” piece, Mhairi Aitken, an Ethics Fellow at the Alan Turing Institute, wrote, “Words matter, and how we talk about AI has very real implications for how we engage with AI.” [39] Michael Birtwistle, the associate director of law and policy at the Ada Lovelace Institute, is quoted in the Guardian highlighting the hypothetical nature of “frontier AI”: “Policymaker attention and regulatory efforts are concentrated on a set of capabilities that don’t exist yet, a set of models that don’t yet show those capabilities.” [28] Meredith Whittaker, President of Signal, was quoted on the subject of “existential risk” as saying, “I think we need to recognize that what is being described, given that it has no basis in evidence, is much closer to an article of faith, a sort of religious fervor, than it is to scientific discourse.” [40]

We should view the term “frontier AI” with skepticism if not outright suspicion, as yet another Trojan horse of AI hype. If we are to use this term at all, we should follow the standard set by Shannon Vallor and Ewa Luger, researchers and co-principal investigators of the UK government-funded “Bridging Responsible AI Divides” (BRAID) programme. [41] In a blog post excoriating the government’s sole focus on technical expertise as the path to “AI safety,” they correctly add the modifier “so-called” before “frontier AI” and consistently place the word “frontier” in scare quotes. [42] Journalists covering the current debates around AI should follow this practice; otherwise, they will effectively be endorsing the idiosyncratic views of those they ought to be reporting on with impartiality. Speaking of a hypothetical entity as though it is real puts journalists in the uncomfortable position of reporting on the future as though it has already happened.

“Frontier AI” is AI hype. It is hypothetical, not real. It diverts attention from AI’s actual harms by focusing on so-called “existential risk.” It carries with it connotations of colonialism and conquest that we should not be endorsing. The sooner everyone stops using this term, the better.