Abstract
In mid-2023, promoters of the idea that artificial intelligence (AI) poses an “existential risk” coined a new term, “frontier AI,” which refers to “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.” They were able to disseminate the new term via the United Kingdom (UK) government’s Frontier AI Taskforce (formerly the Foundation Model Taskforce) as well as the UK’s AI Safety Summit, held in November 2023.
I argue that adoption of the term “frontier AI” is harmful and contributes to AI hype. Promoting this new term is a way for its boosters to focus the public conversation on the AI-related risks they think are most important, namely “existential risk”—a scenario in which AI is able to bring about the destruction of humanity. Simultaneously, “frontier AI” is a re-branding exercise for the large-scale generative machine learning (ML) models that have been shown to cause severe and pervasive harms (including psychological, social, and environmental harms). Unlike “existential risk,” these harms are actual rather than theoretical, yet the term “frontier AI” shifts our collective attention away from them and toward hypothetical doomsday scenarios.
Moreover, “frontier AI” as a term invokes the colonial mindset, further reinscribing the harmful dynamics between the handful of powerful Western companies who produce today’s generative AI models and the people of the “Global South” who are most likely to experience harm as a direct result of the development and deployment of these AI technologies.
1 Introduction
In the current frenzy of concern around artificial intelligence, there are numerous fault lines and competing camps: “AI doomers,” [1] “techno-optimists,” [2] and tech ethicists disagree on which AI harms should take precedence, the likelihood that AI could become sentient, and the appropriate regulatory strategies for overseeing risks associated with AI. With billions of dollars at stake and potentially significant geopolitical implications, there is considerable upside for those able to frame these AI debates using their preferred terminology. Changes in language often contribute to AI hype by shifting the common parlance from a purely descriptive term to one whose connotations suggest that AI is more capable, powerful, or human-like than is warranted.
2 Reshaping the linguistic terrain of AI
Several efforts to shift the linguistic terrain of AI have succeeded in recent years. In August 2021, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) coined the term “foundation model,” giving a new name and characterization to what until then had simply been known as “large” machine learning models. HAI defined a “foundation model” as a machine learning model that is “trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks.” [3] Whereas the term “large machine learning model” places the emphasis on the amount of data required to power the model, the term “foundation model” emphasizes the model’s potential function or capability—that is, the new term implies that these models will serve as the “foundation” for a wide range of tasks. While it was not actually the case at the time that so-called “foundation models” were being used for a “wide range of downstream tasks,” [Footnote 1] the coining of the term was in effect an effort to bring about a future in which that situation would come to fruition.
At the time Stanford was inventing the term “foundation models,” large language models were already receiving criticism for their tendency to “encode and reinforce hegemonic biases,” be misperceived as understanding language and having communicative intent, produce false and inaccurate outputs, and leak personally identifiable information. [4] The very public fallout from that critique was cited as a potential motivator for the re-branding of large machine learning models as “foundation models.” [5] In a world that relies on search engine optimization for information retrieval, re-branding these models was a way to hide the history of their criticism. The term “foundation models” thus contributed to AI hype by simultaneously suggesting that large machine learning models were in much wider use than was in fact the case at the time and attempting, via a re-brand, to elide high-profile criticism leveled at those same models.
In a slightly different vein, the term “hallucination” [6] was popularized by Google [7] in the aftermath of ChatGPT’s sudden rush to worldwide popularity. The term “hallucination” is meant to describe the behavior of large language models when they output false (but often plausible-sounding) information in response to a user query. Critics pointed out that this language is anthropomorphizing. [8] The original definition of “hallucinate” was “to seem to see, hear, feel, or smell something that does not exist, usually because of a health condition or because you have taken a drug.” [9] As machine learning models a) do not see, hear, feel, or smell, and b) have no direct experience of what does or does not exist, “hallucination” is a misleading (and anthropomorphizing) term to use in the context of an AI tool; the word assigns to AI a human-likeness that it does not actually possess. Nevertheless, the term “hallucinate” has been broadly adopted. In a major win for AI hype, the Cambridge Dictionary named “hallucinate” the Word of the Year for 2023 [10] and updated the definition [9] to include its use in AI-related contexts.
3 The birth of “frontier AI”
Now, researchers and policymakers with ties to the Effective Altruist movement have coined another new term designed to feed the AI hype cycle: “frontier AI.” In July 2023, researchers with ties to the Future of Humanity Institute (FHI) [11]—an organization famously focused on the “existential risk” that AI supposedly poses to humanity—published a non-peer-reviewed paper [12] on arXiv bestowing a new name on a particular conception of artificial intelligence: “frontier AI.” The paper’s authors define “frontier AI” as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.” These authors have close ties to the Effective Altruist (EA) movement, [Footnote 2] which has promoted a focus on “existential risk” and whose adherents have taken (arguably quite successful) steps to influence government AI policymaking in that direction. [13]
In April 2023, tech investor and entrepreneur Ian Hogarth published a viral opinion piece in the Financial Times [14] speculating that large commercial AI models pose an “existential threat” to humanity and must be reined in. In June 2023, the UK announced that Hogarth would chair the government’s Foundation Model Taskforce. [15] The EA paper was published in July and was apparently presented to the UK prime minister’s office, with the PM’s special advisors on AI and the director and deputy director of the Department for Science, Innovation and Technology (DSIT) in attendance. [16] In September, the UK government both re-named the Foundation Model Taskforce as the “Frontier AI Taskforce” [17] and announced that it would hold an invitation-only “AI Safety” Summit focused specifically on “frontier AI.” The UK government is arguably in the thrall of Effective Altruists and their obsession with “existential risk.” [16] In a matter of months, individuals with strong ties to the AI “doomer” camp were able to get their preferred framing both adopted and prioritized by one of the most powerful governments in the world.
But what is “frontier AI”? And what is at stake for society when this new language is adopted to describe AI technologies?
4 Defining “frontier AI”
Frustratingly for our purposes, the definition of “frontier AI” is quite fuzzy. According to the UK government, “frontier AI” consists of “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models.” [18] Already with this definition, the emphasis is on the speculative future rather than the actual present. “Frontier AI” will “match or exceed” what “today’s most advanced models” are capable of. [Footnote 3] The term designates “highly capable foundation models that could possess dangerous capabilities.” [Footnote 4] In other words, “frontier AI” is speculative—it does not even exist yet, but the implication is that it is right around the corner.
The idea of soon-to-be massively powerful AI technologies is a key indicator of the hype-inflated worldview inherent in the term “frontier AI.” The people popularizing this term are deeply concerned with what they call “existential risk” and “value alignment.” That is, they fear that a super-powerful AI could someday (perhaps quite soon) go rogue, leading to the destruction of humanity. As a result, they are invested in finding ways to attempt to align the “values” of AI with those of humanity. Often, the very same people making these claims are actually building the technologies they claim to so greatly fear. [Footnote 5] [19] Critics have argued that stoking fears of “existential risk” is really just another form of AI hype: if it is powerful enough to kill us all, then it must be really powerful. Focusing on far-future dangers is a means of diverting attention from the regulation-free status quo, which works in favor of the small cadre of companies developing these AI models.
Not all AI experts believe that “existential risk” is a reasonable fear [20, 21], but there are high-profile people in the field who do, and they have a lot of money to mobilize in service of promoting their views. In fact, the Effective Altruist movement explicitly encourages earning as much money as possible (so that those funds can be used to “do the most good”). One notable example of how the EA philosophy can fail in spectacular ways is former billionaire and convicted fraudster [22] Sam Bankman-Fried, a high-profile member of the Effective Altruist community.
As outlined above, the authors of the July paper introducing the term “frontier AI” have links to the Future of Humanity Institute at Oxford. FHI’s founding director is philosopher Nick Bostrom, who is well known for his interest in existential risk and other highly speculative potential futures with AI technologies. Bostrom is a leading figure in what has been dubbed the TESCREAL ideologies [23], a constellation of linked beliefs held by a small group of influential and controversial individuals who are attempting to steer the AI agenda toward the creation of artificial general intelligence (AGI). A significant portion of the concern around “existential risk” hangs on the fear that AGI will achieve self-consciousness and then turn its aims to the destruction of humanity.
Given its origins and the connotations within its definition, anytime someone adopts the term “frontier AI,” they are effectively endorsing this very specific set of beliefs about AI and its capabilities. At the same time, using the term gives credence to the idea that this is a technology that already exists. If we constantly talk about “frontier AI” as though it is a real thing, the fact that the term apparently refers to the potential existence of certain powerful or dangerous AI capabilities becomes an inconvenient afterthought.
5 Connotation vs denotation
The development of new vocabulary is often an exercise in trying to shift the discourse by using words that have not just a denotation—an explicit or direct meaning—but also a particular connotation—what the word implies or suggests. The words we choose to name things are often deeply connected to values and commitments that influence our perceptions of the thing that is named. “Pre-born baby” [24] and “embryo” or “fetus” nominally describe the same thing, but the former connotes a particular worldview and set of values, focused on opposition to abortion. People in the “pro-life” movement [Footnote 6] are eager to get others to adopt this vocabulary and framing. Similarly, the TESCREALists behind “frontier AI” are hoping to get this language adopted to advance their own agenda.
So what might “frontier AI” connote? To start, the word “frontier” evokes the American “Wild West” and its colonial mentality: a place where civilized white men venture into the dangerous unknown and use lethal force to dominate and exploit what they find there, for profit. Others [25] have argued [26] that machine learning reproduces colonial logics, and the term “frontier AI” certainly invokes and reinscribes this colonial dynamic. We can also see the colonial dynamic between the handful of powerful Western companies who produce today’s generative AI models and the people of the “Global South” [27] who are most likely to experience harm as a direct result of the development and deployment of these AI technologies.
The UK’s AI Safety Summit agenda apparently argued that the frontier is “where the risks are most urgent” and “where the vast promise of the future economy lies” [28]—marking the frontier as a place of great opportunity, but also great danger. Thus, “frontier AI” becomes a verbal proxy for the “existential risk” that TESCREALists are constantly pushing, with the added implication that such powerful technology can also lead to large profits for those able to harness it.
A technological frontier also calls to mind the glittering expanse above us—space. [Footnote 7] TESCREALists like billionaire and SpaceX founder Elon Musk are keen to lead the charge into the stars, around which they plan to build vast computer simulations in which astronomical numbers of digital people will live out happy lives. [29] According to the flavor of utilitarianism known as longtermism (the “L” of “TESCREAL”), due to their overwhelming numbers, the sum total happiness of these hypothetical space-based future digital people dwarfs the well-being of all actually living people on Earth; therefore, ensuring that these future digital souls come into existence is, on their view, a moral duty. The path to bringing those souls into being starts with the development of AGI.
The word “frontier” also implies a fixed boundary, slowly receding in a linear fashion, [Footnote 8] as technology makes steady progress into the future, wresting the known from the formerly unknown. This, too, is misleading. Technology does not progress in a steady, linear march. Sometimes it limps along, as in the “AI winter,” [30] until something changes to renew its progress, and sometimes people stand up to refuse a particular technology, halting its “progress” for the sake of values they see as more important. [31]
Profit. Danger. Outer space. Progress. These are the connotations of “frontier AI,” and they make it obvious that the term is an exercise in AI hype. Today’s AI tools have not yet demonstrated that they have a sustainable business model. [32] The main dangers that “frontier AI” boosters are concerned with are hypothetical “existential risks.” We are nowhere near colonizing outer space. And with generative AI threatening the livelihoods of creatives such as writers, illustrators, and actors, it is clear that not everyone agrees that today’s AI tools represent progress.
6 Reasons to reject “frontier AI”
We need to push back against the term “frontier AI” before it becomes uncritically adopted by the press and others. It is not clear that the thing TESCREALists want to name with the words “frontier AI” even exists—but if it does, then “highly capable generative AI models,” while less pithy, is perfectly serviceable and much more matter-of-fact. (I think even “foundation models” would be preferable.)
As we have seen with terms like “foundation model” and “hallucination,” the press and others with an investment in staying up to date with technology are typically eager to adopt the latest terminology so as to appear in the know. (Special language is a way to distinguish insiders from outsiders, as anyone navigating the sea of acronyms at most universities can attest.)
Promoting the term “frontier AI” is a way for its boosters to frame the conversation around what they think is important while simultaneously re-branding [Footnote 9] large-scale generative machine learning models to, once again, divorce them from prior criticism. Large-scale generative machine learning models have been shown to cause several types of actual, non-hypothetical harms, including psychological harms [33, 34], social harms [35, 36], and environmental harms [37, 38]. “Frontier AI” encourages us to look past those very real harms in favor of a focus on the hypothetical future.
Fortunately, the same researchers who have been leading the charge to uncover and highlight real AI harms and to develop robust AI governance mechanisms have already begun speaking out against efforts to shift the verbal terrain of AI. In a letter to the editor of the Financial Times responding to Hogarth’s “God-like AI” piece, Mhairi Aitken, an Ethics Fellow at the Alan Turing Institute, wrote, “Words matter, and how we talk about AI has very real implications for how we engage with AI.” [39] Michael Birtwistle, the associate director of law and policy at the Ada Lovelace Institute, was quoted in the Guardian highlighting the hypothetical nature of “frontier AI”: “Policymaker attention and regulatory efforts are concentrated on a set of capabilities that don’t exist yet, a set of models that don’t yet show those capabilities.” [28] Meredith Whittaker, President of Signal, was quoted on the subject of “existential risk” as saying, “I think we need to recognize that what is being described, given that it has no basis in evidence, is much closer to an article of faith, a sort of religious fervor, than it is to scientific discourse.” [40]
We should view the term “frontier AI” with skepticism if not outright suspicion, as yet another Trojan horse of AI hype. If we are to use this term at all, we should follow the standard set by Shannon Vallor and Ewa Luger, researchers and co-principal investigators of the UK government-funded “Bridging Responsible AI Divides” (BRAID) programme. [41] In a blog post excoriating the government’s sole focus on technical expertise as the path to “AI safety,” they correctly append the modifier “so-called” before “frontier AI” and always use the word “frontier” in scare quotes. [42] Journalists covering the current debates around AI should follow this practice; otherwise, they will effectively be endorsing the idiosyncratic views of those they ought to be reporting on with impartiality. Speaking of a hypothetical entity as though it is real puts journalists in the uncomfortable position of reporting on the future as though it has already happened.
“Frontier AI” is AI hype. It is hypothetical, not real. It diverts attention from AI’s actual harms toward so-called “existential risk.” It carries with it connotations of colonialism and conquest that we should not endorse. The sooner everyone stops using this term, the better.
Notes
1. ChatGPT was only launched to the public in November 2022.
2. The lead author, Markus Anderljung, is presently an Adjunct Fellow at the Center for a New American Security, funded by Open Philanthropy, which names Effective Altruism among its focus areas [45] and is also behind a wide-ranging effort to influence American AI policy development [13] to focus on long-term catastrophic risks. Anderljung also had a previous stint seconded to the UK Cabinet Office as a Senior Policy Specialist, where he worked to shape the UK’s regulatory approach to AI. The second author, Joslyn Barnhart, is a Visiting Senior Research Fellow at FHI. The third author, Anton Korinek, is a Research Affiliate at FHI. The fourth author, Jade Leung, and the fifth author, Cullen O’Keefe, are both Research Affiliates at FHI. The sixth author, Jess Whittlestone, and the seventh author, Shahar Avin, are both Senior Research Affiliates at the Centre for the Study of Existential Risk at the University of Cambridge. Whittlestone was also seconded one day per week to assist the UK government on AI policy [16] in the lead-up to the AI Safety Summit. The eighth author, Miles Brundage, was an AI Policy Research Fellow at FHI.
3. Emphasis mine.
4. Emphasis mine.
5. For instance, signatories of the “AI pause” letter included engineers from Meta and Google, as well as Stability AI founder and CEO Emad Mostaque [46].
6. Yet another name that both signals certain beliefs and commitments and tries to shape discourse and perceptions as a result.
7. “Space: the final frontier” is the first line of the opening voice-over in several of the “Star Trek” TV series.
8. I am grateful to Iñaki Goñi for this insight.
9. Again, “foundation models” was the first re-brand.
References
[1] Seal, T. (2023) AI Doomers Take Center Stage at the UK’s AI Summit. Bloomberg UK. https://www.bloomberg.com/news/articles/2023-11-01/ai-doomers-take-center-stage-at-the-uk-s-ai-summit. Accessed 18 Dec 2023
[2] Andreessen, M. (2023) Why AI Will Save The World. Substack. https://pmarca.substack.com/p/why-ai-will-save-the-world. Accessed 18 Dec 2023
[3] Bommasani, R., Hudson, D., Adeli, E., et al. (2022) On the opportunities and risks of foundation models. arXiv. https://arxiv.org/pdf/2108.07258.pdf. Accessed 18 Dec 2023
[4] Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S. (2021) On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021, Virtual Event, Canada. https://doi.org/10.1145/3442188.3445922
[5] Field, H. (2021) At Stanford’s “foundation models” workshop, large language model debate resurfaces. Tech Brew. https://www.emergingtechbrew.com/stories/2021/08/30/stanfords-foundation-models-workshop-large-language-model-debate-resurfaces. Accessed 18 Dec 2023
[6] Klein, N. (2023) AI machines aren’t ‘hallucinating’. But their makers are. The Guardian. https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein. Accessed 18 Dec 2023
[7] Stening, T. (2023) What are AI chatbots actually doing when they ‘hallucinate’? Here’s why experts don’t like the term. Tech Xplore. https://techxplore.com/news/2023-11-ai-chatbots-hallucinate-experts-dont.html. Accessed 18 Dec 2023
[8] Smith, G. (2023) An AI that can “write” is feeding delusions about how smart artificial intelligence really is. Salon. https://www.salon.com/2023/01/01/an-ai-that-can-write-is-feeding-delusions-about-how-smart-artificial-intelligence-really-is/. Accessed 18 Dec 2023
[9] Cambridge Dictionary. Hallucinate. https://dictionary.cambridge.org/dictionary/english/hallucinate. Accessed 18 Dec 2023
[10] University of Cambridge (2023) https://www.cam.ac.uk/research/news/cambridge-dictionary-names-hallucinate-word-of-the-year-2023. Accessed 18 Dec 2023
[11] Wikipedia. Future of Humanity Institute. https://en.wikipedia.org/wiki/Future_of_Humanity_Institute. Accessed 18 Dec 2023
[12] Anderljung, M., Barnhart, J., Korinek, A., et al. (2023) Frontier AI Regulation: Managing Emerging Risks to Public Safety. arXiv. https://doi.org/10.48550/arXiv.2307.03718
[13] Bordelon, B. (2023) How a billionaire-backed network of AI advisers took over Washington. Politico. https://www.politico.com/news/2023/10/13/open-philanthropy-funding-ai-policy-00121362. Accessed 18 Dec 2023
[14] Hogarth, I. (2023) We must slow down the race to God-like AI. Financial Times. https://www.ft.com/content/03895dc4-a3b7-481e-95cc-336a524f2ac2. Accessed 18 Dec 2023
[15] Gov.uk (2023) https://www.gov.uk/government/news/tech-entrepreneur-ian-hogarth-to-lead-uks-ai-foundation-model-taskforce. Accessed 18 Dec 2023
[16] Clarke, L. (2023) How Silicon Valley doomers are shaping Rishi Sunak’s AI plans. PoliticoPro. https://www.politico.eu/article/rishi-sunak-artificial-intelligence-pivot-safety-summit-united-kingdom-silicon-valley-effective-altruism/. Accessed 18 Dec 2023
[17] Gov.uk (2023) https://www.gov.uk/government/news/industry-and-national-security-heavyweights-to-power-uks-frontier-ai-taskforce. Accessed 18 Dec 2023
[18] Gov.uk (2023) https://www.gov.uk/government/publications/ai-safety-summit-introduction/ai-safety-summit-introduction-html. Accessed 18 Dec 2023
[19] Tucker, I. (2023) Signal’s Meredith Whittaker: ‘These are the people who could actually pause AI if they wanted to’. The Guardian. https://www.theguardian.com/technology/2023/jun/11/signals-meredith-whittaker-these-are-the-people-who-could-actually-pause-ai-if-they-wanted-to. Accessed 18 Dec 2023
[20] Goldman, S. (2023) The thin line between AI doom and hype. VentureBeat. https://venturebeat.com/ai/the-thin-line-between-ai-doom-and-hype-the-ai-beat/. Accessed 18 Dec 2023
[21] Landgrebe, J., Smith, B. (2022) Why machines will never rule the world: artificial intelligence without fear. Routledge, New York
[22] Sherman, N., Hoskins, P. (2023) ‘Crypto King’ Sam Bankman-Fried faces decades in jail after guilty verdict. BBC. https://www.bbc.co.uk/news/business-67281759. Accessed 18 Dec 2023
[23] Torres, E. (2023) The Acronym Behind Our Wildest AI Dreams and Nightmares. Truthdig. https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/. Accessed 30 Jan 2024
[24] https://preborn.com/. Accessed 18 Dec 2023
[25] Hao, K. (2022) Artificial intelligence is creating a new colonial world order. MIT Technology Review. https://www.technologyreview.com/2022/04/19/1049592/artificial-intelligence-colonialism/. Accessed 18 Dec 2023
[26] Birhane, A., Talat, Z. (2023) Chapter 11: It’s incomprehensible: on machine learning and decoloniality. In: Lindgren, S. (ed.) Handbook of Critical Studies of Artificial Intelligence, pp. 128–140. https://doi.org/10.4337/9781803928562.00016
[27] https://www.ghostwork.org/. Accessed 18 Dec 2023
[28] Bhuiyan, J. (2023) How the UK’s emphasis on apocalyptic AI risk helps business. The Guardian. https://www.theguardian.com/technology/2023/oct/31/uk-ai-summit-tech-regulation. Accessed 18 Dec 2023
[29] Torres, E. (2021) Against longtermism. Aeon. https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo. Accessed 18 Dec 2023
[30] Wikipedia. AI winter. https://en.wikipedia.org/wiki/AI_winter. Accessed 30 Jan 2024
[31] Liberty (2020) Five Reasons Why Facial Recognition Must Be Banned. https://www.libertyhumanrights.org.uk/issue/five-reasons-why-facial-recognition-must-be-banned/. Accessed 18 Dec 2023
[32] Karpf, D. (2022) Money Will Kill ChatGPT’s Magic. The Atlantic. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-chatbots-openai-cost-regulations/672539/. Accessed 18 Dec 2023
[33] Perrigo, B. (2023) Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/. Accessed 18 Dec 2023
[34] Atillah, I. (2023) Man ends his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change. Euronews. https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate. Accessed 18 Dec 2023
[35] Nicoletti, L., Bass, D. (2023) Humans Are Biased. Generative AI Is Even Worse. Bloomberg Technology. https://www.bloomberg.com/graphics/2023-generative-ai-bias/. Accessed 18 Dec 2023
[36] Turk, V. (2023) How AI reduces the world to stereotypes. Rest of World. https://restofworld.org/2023/ai-image-stereotypes/. Accessed 18 Dec 2023
[37] Gendron, W. (2023) ChatGPT needs to ‘drink’ a water bottle’s worth of fresh water for every 20 to 50 questions you ask, researchers say. Business Insider. https://www.businessinsider.com/chatgpt-generative-ai-water-use-environmental-impact-study-2023-4?op=1&r=US&IR=T. Accessed 18 Dec 2023
[38] Saenko, K. (2023) Is generative AI bad for the environment? A computer scientist explains the carbon footprint of ChatGPT and its cousins. The Conversation. https://theconversation.com/is-generative-ai-bad-for-the-environment-a-computer-scientist-explains-the-carbon-footprint-of-chatgpt-and-its-cousins-204096. Accessed 18 Dec 2023
[39] Aitken, M. (2023) Letter: AI’s God-like power is a Big Tech narrative that needs calling out. Financial Times. https://www.ft.com/content/fd99e7d2-9c5f-4ccf-b203-936d1528c6cc. Accessed 18 Dec 2023
[40] Heaven, W. (2023) How existential risk became the biggest meme in AI. MIT Technology Review. https://www.technologyreview.com/2023/06/19/1075140/how-existential-risk-became-biggest-meme-in-ai/. Accessed 18 Dec 2023
[41] https://braiduk.org/. Accessed 18 Dec 2023
[42] Vallor, S., Luger, E. (2023) A shrinking path to safety: how a narrowly technical approach to align AI with the public good could fail. https://braiduk.org/a-shrinking-path-to-safety-how-a-narrowly-technical-approach-to-align-ai-with-the-public-good-could-fail. Accessed 18 Dec 2023
[43] Shevlin, H., Vold, K., Crosby, M., Halina, M. (2019) The limits of machine intelligence. EMBO Reports. https://doi.org/10.15252/embr.201949177
[44] Mearian, L. (2023) What are LLMs, and how are they used in generative AI? Computerworld. https://www.computerworld.com/article/3697649/what-are-large-language-models-and-how-are-they-used-in-generative-ai.html. Accessed 30 Jan 2024
[45] Open Philanthropy. https://www.openphilanthropy.org/focus/. Accessed 18 Dec 2023
[46] Loizos, C. (2023) 1,100+ notable signatories just signed an open letter asking ‘all AI labs to immediately pause for at least 6 months’. TechCrunch. https://techcrunch.com/2023/03/28/1100-notable-signatories-just-signed-an-open-letter-asking-all-ai-labs-to-immediately-pause-for-at-least-6-months. Accessed 18 Dec 2023
Funding
The author has no relevant financial or non-financial interests to disclose.
Glossary
- Artificial General Intelligence (AGI): A machine that behaves in ways that would be called intelligent if a human were so behaving, in the context of a broad set of tasks and situations. Generally this means showing an intelligence that is flexible and robust and that demonstrates behavior that could reasonably be described as learning, innovating, and reasoning [43].
- AI Hype: Excessive positive attention lavished on Artificial Intelligence (AI) in the news and media that makes AI seem extremely important or exciting, suggesting that AI is more capable, powerful, or human-like than is warranted. Such coverage usually takes the claims of technology companies at face value rather than applying any critical lens or due diligence to fact-check those claims.
- Effective Altruism (EA): A philosophical movement that inherits most of its moral commitments from Utilitarianism, which views “the good” as that which improves the lives of the most people. Effective Altruists want to identify the most efficient ways to help as many people as possible, hence the “effective” in the name. EA also endorses the idea of “earning to give”: taking a job that earns a great deal of money in order to then use that money to fund effective work on pressing problems. Effective Altruism is the “EA” in the “TESCREAL” acronym.
- Existential Risk: Short-hand for the idea that artificial intelligence could one day pose a threat to the continued existence of humankind. It is important to note that the concept of “existential risk” is highly contested, even among AI experts.
- Foundation Model: A term coined in 2021 to designate large machine learning models. “Large” here generally means a model with millions, billions, or even trillions of parameters. (A parameter is “something that helps [a large machine learning model] decide between different answer choices”) [44].
- TESCREAL/TESCREAList: Pronounced “tess-cree-all.” A set of interrelated ideologies that together are driving the race to create AGI. TESCREAL stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. “At the heart of TESCREALism is a ‘techno-utopian’ vision of the future. It anticipates a time when advanced technologies enable humanity to accomplish things like: producing radical abundance, reengineering ourselves, becoming immortal, colonizing the universe and creating a sprawling ‘post-human’ civilization among the stars full of trillions and trillions of people. The most straightforward way to realize this utopia is by building superintelligent AGI” [23].