1 From AI hype to AI hyping

Although AI hype is often framed as simply a fact about our world that affects us, AI hype is actually something that is done [1]. AI hyping is a shared practice that, strictly speaking, involves writing and talking about AI so that people get excited about or interested in it [2]. The emphasis should thus be on the process of hyping, rather than the content of a specific instance of hype. This framing of hyping does not fundamentally presuppose any specific valence on the possibilities; there is both ‘boosterist’ and ‘doomerist’ hyping [3]. Identifying an act of hyping does not require an analysis of the value or validity of claims being made, or the degree to which a speaker is engaged in hyperbole. To the contrary, hyping is concerned simply with promotion and the generation of interest and attention.

Importantly, AI hyping largely consists of generating excitement by claiming a capability or agency for AI systems in ways that cannot be verified or falsified. Headlines claiming AI “may be coming for our jobs” [4], that “industry leaders” are warning “AI poses a ‘Risk of Extinction’” [5], or even that “AI can outperform doctors” [6] cannot be shown to be wrong because they variously focus on some time far in the future or because they mobilize a tempting but ultimately meaningless comparison (e.g. What does it mean to ‘outperform’ doctors? Under what circumstances and in what settings?). Hyping depends on the ability to continue promoting AI, regardless of what may happen in, or be true about, the world.

Additionally, AI hyping is about linking AI to a long current of well-established tropes and narratives: AI narratives reliably oscillate between dystopia and utopia, invoking century-old dreams and fears about intelligent machines [7, 8]. The use of common narratives about technology–AI will save us all! AI will doom us all!–enables the hyping to capture attention more easily, whether to promote or demote AI. Even ambivalent claims, like in the headline “Will AI free us from drudgery– or leave us jobless and hungry?” [9] can serve as an act of hyping since it is designed to capture attention without making a specific claim that could someday be verified or falsified. At the same time, the extreme claims of AI hyping, regardless of whether the hype praises or condemns AI, have had a significant distorting effect on our understanding and debates about AI, and particularly about its social, political, economic, and environmental impacts, as policymakers and pundits react to the hyping, rather than an assessment of likely impacts of AI systems [10]. For example, hyping that suggests AI will replace workers has resulted in increased investigation of policy proposals promoting a universal basic income [11], while hyping that suggests that an omnipotent AI system might imperil all of humanity has led to serious proposals that a ‘kill switch’ be mandated in AI system designs [12].

1.1 Here we go again: generative AI hyping

We take for granted that readers will recognize the characteristics of AI hyping in the context of recently introduced text- or image-based generative AI systems (such as ChatGPT, Midjourney, Bard, Dall-E, or Claude). These systems are the latest round of AI technology that, yet again, is allegedly poised to bring about fundamental changes to the ways we live, work, and relate to one another. In many ways, discussions about generative AI have mirrored past patterns of excitement about new possibilities alongside anxieties around automation and intelligent machines [13]. In fact, critics and scholars have argued that AI hyping is, like hyping of other past technologies, often carefully curated by people in power, precisely to sustain their influence and even their personal benefit [14].

Generative AI systems are claimed to score well on the LSAT and other standardized tests [15], including those assessing divergent thinking [16]; support clinical research and doctor-patient interaction [17]; directly engage with their physical world, such as in the context of manufacturing [18]; and much more. Predictions about the beneficial impacts of generative AIs range from multiple percentage points of economic growth [19] to revolutionary new artistic self-expression [20]. At the same time, concerns and potential harms are rapidly emerging around generative AI systems. These include job displacement [21], biased representations [22, 23], incorrect content or ‘hallucinations’, or even potential national security concerns [24]. Some of these harms are no longer speculative, but real. For example, studies have shown that generative AI perpetuates stereotypical public health tropes [25], sustains racial stereotypes in recruiting [26], and consistently hallucinates when used in legal tasks [27].

One key challenge to serious discussion of AI’s role in society is that the focus of hyping is typically on the technical machinery, rather than the broader sociotechnical systems in which the generative AIs operate [28]. The narrative focus is entirely on the machines’ capabilities and agency, rather than the active, necessary role played by people and communities when these systems are actually deployed and integrated into social life [29]. Any AI system intervenes into already existing social structures if it is useful: a system like ChatGPT depends on users who have prompts they need responses to; an autonomous driving AI needs cars and passengers with destinations; and a medical AI system is useless without doctors and nurses to interpret or implement its outputs in ways that affect patient outcomes.

A narrow focus on the technical system also makes it particularly easy to cast grandiose claims about the dangers of AI. For example, recent years have seen many instances of AI hyping with the narrative of machines becoming too powerful to control. In particular, this dynamic has become very visible as powerful figures in the AI field, including Geoffrey Hinton, a prominent machine learning researcher often considered the “godfather of AI” [30], have emphasized concerns about the “existential risk” of AI (i.e., AI’s potential of eradicating humanity). However, this hyping ignores the broader sociotechnical contexts in which all of these systems are deployed, and as a result, has often led to relatively fatalistic framings of the possibility of existential risk that focus solely on the AIs, even though most existential risks would require additional, non-AI capabilities to have any realistic chance of happening.

Although situated as polar opposites, stories of excitement and of terror are both integral to the practice of AI hyping because they grossly simplify AI narratives and pit them against the realities of AI design and use. These simplifications systematically distract us from the real-world situations in which AI is developed and deployed. And whilst this observation allows us to uncover how the organization and stratification of society are deeply entangled with technology and innovation, it doesn’t necessarily allow us to better understand and respond to the dynamics and widespread effects of AI hyping per se. Understanding and responding to the effects of AI hyping requires teaching both specialists and the public to see the social dimensions that AI hyping omits, from the narratives that sustain hype claims to the missing steps of social integration that are never included in the hype.

1.2 “AI is just math.”

AI hyping presupposes that the technical aspects of artificial intelligence can be separated from its sociocultural, organizational, and psychological aspects. In other words, a key ingredient to AI hyping is perpetuating a narrative that frames AI as a math problem, a statistical model, or simply 1s and 0s [31,32,33]. This framing is visible in assumptions implied in much of the AI hype, such as the assumption that AI will diagnose diseases better than any human because it can better analyze all the medical data we have, or that AI understands the logics of language because it can predict the likeliness of word sequences.

Sometimes, the “AI is just math” frame is used to debunk myths around AI’s power [31]. More commonly, framing AI as exclusively math or code is a key part of AI hyping that privileges the technical aspects of AI over the social and interpersonal contexts, processes, organizations, and practices in which it is embedded (including the activity of hyping itself). AI-as-math essentializes AI in a very specific way that divorces it from social context. And as a result, extreme claims about AI’s capabilities or impacts often go unchallenged when placed in this frame. As long as the mathematical claims of an AI are viewed as plausible, people will under-examine the practical claims of an AI in its context of deployment [34]. And not only that, when an extreme claim about the AI does not come to fruition, for example that ChatGPT will cause widespread unemployment [35], then the blame can always be placed on something that was not the subject of the hype claim, like economic factors or theories about labor and the workforce, none of which are admitted to participate in the mathematical function of the AI itself. Moreover, the fallacious separation of AI from its sociotechnical contexts has been studied in detail by various scholarships, from Science and Technology Studies to the history of science, philosophy of scientific practice, and more recently critical technology studies, among others [36,37,38,39]. And it should be intimately familiar to industry practitioners as they also grapple with the entanglement of technical with the social, albeit to sell products.

The point here is that AI hyping structurally hinders efforts to consider AI as the sociotechnical system it is, and not as a purely technical or a purely social system. If AI is hyped as able to “outperform” doctors, it becomes more difficult to investigate how AI performance can be evaluated in the real-world hospitals and exam rooms where doctors conduct their work. If AI is hyped as someday posing an “existential risk” for humanity, it becomes more difficult to understand the concrete risks faced by those who currently interact with AI systems. When AI hyping dominates narratives and discussions, then it creates barriers to helping people understand the sociotechnical frames within which we should conceptualize AI systems. And if the social dimensions of AI are foreclosed by AI hyping, it becomes difficult to interrogate the hype at all.

One very specific implication is that AI hyping affects how we organize AI education (broadly understood), as well as how we think, research, and teach about the entanglement of social and technical concerns. For example, many AI-relevant degree programs require only (or almost only) technical courses, thereby perpetuating the view that AI is a purely technical product. We thus consider below how pedagogies focused on AI and AI-adjacent disciplines, particularly (but not solely) in technical and engineering fields in higher education (e.g., computer science, electrical engineering, data science, or statistics), can instead be used to meaningfully blunt the impacts of AI hyping through pedagogical innovation that jointly examines the social and the technical, rather than as separate dimensions. First, though, we consider the broader set of approaches to defeating or mitigating AI hyping.

2 Popping the bubble

The act of AI hyping, much like the production of economic bubbles [40], depends on value claims remaining unchallenged and unsubstantiated. For AI hyping, the dissociation of the technical from the social is one way to prevent such challenges. In the following, we discuss the ways a sociotechnical framing for AI claims can pop the bubbles blown by AI hyping. We do so to unwind the effects of AI hyping on education, as well as to implicate educational strategies in combatting AI hyping. For example, one could argue that pushes and pulls of AI hyping occur within a dialectic that oscillates between the ‘AI-as-utopia’ and ‘AI-as-dystopia’ poles. This shapes educational initiatives in important ways, so to address the structural effects of AI hyping on AI education, it makes sense to consider how we can ‘pop’ the AI hyping bubble. At the same time, it is crucial that this popping is done in the right ways; in particular, efforts to directly show that some hype claim is false or implausible are well-intentioned but ultimately misguided for two different reasons.

First, such efforts are doomed to failure by the nature of hype. Hyping, by default, is not about sharing and discussing facts, but about generating excitement and interest. Efforts to falsify claims, however straightforward and broad they may be, simply fall outside of the ontological frame of AI hyping. By its nature, hyping cannot be falsified, whether because any apparent falsification can be explained away or because truth is not the point of hyping.

When we look at AI hyping, we see that it is quite easy to construct hype about the technical side of AI [41]. And if the focus is solely on the technical aspects of AI, then there will never be any reason that one must conclude that something cannot be done; instead, one can always look elsewhere. AI hyping often claims that AI will be better than humans, for example in diagnostics [42, 43] or in recruiting [44]. When this claim fails, then one can simply point to circumstances, such as mismeasurement by medical technicians or misunderstandings by physicians, or incorrect use by recruiters or prevailing bias among hiring managers—or any of a host of other reasons why the AI system fell short in a particular context. There is never any need to lay blame at the feet of the AI system.Footnote 1

Second, and more importantly, the extreme claims made about AI as part of AI hyping privilege technical interpretations of AI rather than broader sociotechnical systems with AI as one component. A good example is a new regulatory emphasis on red-teaming as an AI accountability technique [45]. Red-teaming is a structured attack on an AI system in a controlled environment to detect vulnerabilities, including harmful or discriminatory outputs from an AI system. Here, the ‘locus’ of the vulnerability is thought to be the technical system itself, rather than the broader and continually emerging sociotechnical and sociopolitical aspects of its design and use.Footnote 2  

Efforts to combat AI hyping by disproving (or otherwise attacking) the technical aspects of an AI system ultimately feed and reinforce the idea that AI is “just math” that can be examined independently of context. For example, one should not counter AI hyping around medical diagnostic systems solely by emphasizing their challenges with generalization and statistical bias (though such problems should be pointed out when present), as that response perpetuates the focus on the purely technical. We contend that the best way to counter AI hyping is instead to reorient discussions around the sociotechnical, as it is not possible to be hyping sociotechnical AI systems. Claims about those systems will have sufficient specificity to be falsifiable, and also almost always provide details that ground evaluation of a particular AI system.

3 Other ways of talking and learning about AI

AI hyping and AI education are deeply entangled. On the one hand, AI hyping has led to significant increases in resources: data science and AI degree programs have rapidly proliferated [47] and demand for courses and content about AI has soared [48].

Even the humanities in higher education are seeing some of these additional resources, as Deans and departments find that they can, for instance, get additional personnel by hiring faculty to teach classes such as “Ethics and AI” or “History of Technology.” The broader higher education community is thus, to some degree, a beneficiary of AI hyping. But at the same time, AI hyping has created new headaches in all disciplines. These range from questions about how to use AI in teaching and research to questions around equity in AI access, amongst other issues.

AI education efforts, i.e., efforts to talk and learn about AI across a wide range of contexts, audiences, and goals, provide an interesting example of the challenges to overcome AI hyping, but also offer opportunities to do so. That is, reshaping AI pedagogy provides one way to pop the hyping bubble.

We take a broad view of what counts as pedagogy, which we see as spanning formal education, on the job training, and public discourse, so we cannot hope to give a complete taxonomy of the different ways that pedagogical efforts could divorce themselves from AI hyping. Instead, we consider three distinct pedagogical goals that one might have, and explore the ways that efforts directed at each goal could blunt AI hyping. We also provide examples from our own experiences, as both demonstration of feasibility and anecdotal evidence for the potential success of this approach.

3.1 Goal 1: understanding AI as part of sociotechnical systems and contexts

A straightforward way to address AI hyping is to help people understand the ways in which AI technologies are always integrated into systems that are already sociotechnical in nature. Many papers have underlined the nature of AI systems as sociotechnical, but the point is more general: all of our environments are already sociotechnical [49]. In other words, AI is not the thing that makes our world sociotechnical; rather, AI intervenes in an already sociotechnical environment. If people can be encouraged to understand that AI is never deployed in isolation but always within a sociotechnical world, then they will potentially be better able to recognize AI hyping as hopelessly underspecified. Ideally, students would learn to recognize any continuities between the pre-AI and post-AI sociotechnical systems, as it can be quite informative to see what has remained relatively stable. However, the first step is simply to learn to look beyond the technology itself.

For example, consider the student success prediction systems that many universities are interested in developing or purchasing. These AI systems will be integrated into highly technologized and data-driven education environments in which students are constantly surveilled by learning analytics platforms, student retention algorithms, and educational technology in the classroom. Explicit teaching about this context can help students to counter AI hyping that claims these success prediction systems will be “transformative” or “empowering,” as the students will ideally come to recognize that those claims are simply empty. More positively, such instruction may help them understand how existing sociotechnical systems and contexts are deeply relevant for the functions and impacts of AI systems. Much depends on the details about how data is collected, how the predictions are used, who has access to them, how success is defined for purposes of the AI, and so forth. Of course, this type of understanding is not a fool-proof defense against AI hyping, but it can help people appropriately constrain its influence on their own understandings of the role of technology in society.

One example of this approach is in the data science courses (both undergraduate and graduate) taught by the authors. In these courses, students learn about various contexts and aspects of sociotechnical systems, and how to identify, for some AI, the relevant parts of those systems. More importantly, students learn to determine (and describe) beneficial and problematic sociotechnical systems for a given piece of AI technology. That is, in addition to learning to describe the sociotechnical systems for some AI, they also learn to identify opportunities to shape the sociotechnical system itself so that the AI acts in more beneficial ways. Anecdotally, students are significantly more skeptical about AI hyping by the end of a course, partly because they come to understand that claims about “the technology” are always relatively content-less.

Another example is public engagement and outreach conducted by one co-author, particularly through a public speakers series that brings together various experts on a distinct topic that intersects with AI. Here, the topical focus serves as a frame (e.g., “Agriculture” or “Security” or “Games”) and the expert guests are asked to prepare short provocations that speak to the connection between AI and the topical focus from their point of view. Inevitably, context and existing sociotechnical systems are brought to the fore. For example, the ways in which the agriculture industry is already data-driven, but also how agricultural practices sometimes clash with AI-based machinery and processes. Often, both audiences and guests walk away with more questions than they had before, indicating an intervention had been made to push beyond AI hyping and towards discourse.

Finally, the historical entanglement of technology with existing social structures of politics and ideology has been the focus of work by two of the co-authors with an organization that takes young professionals to Holocaust sites in Germany and Poland. The goal of these study trips is to help participants grapple with the complicity of professionals in that genocide, and how these acts of complicity were permissible under certain framings of “ethics.” The hope is that this understanding can help participants start to recognize the ethical challenges that arise in their own professional lives. Understanding how ethics, design, and technology were part of a single dynamic in this way is thus a fruitful way of engaging the genealogies and contexts of technology and, importantly, technology innovation and progress.

3.2 Goal 2: recognizing the open-endedness of AI development

A second approach to addressing AI hyping by way of shifting AI pedagogies is to help people realize that the creation of a sociotechnical AI system involves many choices that depend on ethical, societal, psychological, legal, or other types of factors.

As a simple example, for almost every possible use, there are multiple measures and multiple loss functions for which one could optimize a machine learning system. Student success could be understood as average grades, likelihood of enrolling in the next term, expected salary upon graduation, self-reported satisfaction, or many other targets. And even given a target, one might aim to minimize expected error, minimize variance of the error, maximize generalization behavior, or many other goals. The key point is that these choices are not determined by technical considerations, but rather by the needs, interests, politics, and values of the developers and other stakeholders who are allowed to partake in the AI design process [50]. More generally, one can aim to help people understand that many, perhaps even most, of the choices in AI development involve non-technical factors and goals.

The practice of AI hyping frequently depends on people believing in the near-inevitability of the technology. If the AI benefits or harms might or might not happen depending on choices throughout development, then the demands for our attention are much less compelling. One of the co-authors explicitly emphasizes, in classes, capstone projects, and a two-week summer research experience for undergraduates, the importance of non-technical choices. For example, students in all of these contexts are required to first identify all of the choices that must be made across the lifecycle of a data science project, and then explain, for each choice, the technical and non-technical constraints on that choice. This approach has been adopted by some of the other capstone project mentors at the co-author’s institution, and small-N comparisons of the resulting projects with those performed by groups that did not adopt this methodology suggest it is associated with more grounded, less hype-prone work.Footnote 3

More generally, AI hyping is typically framed as though people, including developers, have little-to-no agency in what is created. The well-known protest amongst Google workers against Project Maven [51] is an exception that proves the rule; the protest was notable precisely because developers so rarely push back about non-technical aspects of a project. However, as anyone who has ever built an AI system is acutely aware, the creators’ choices have an enormous impact on the success or failure of the system in practice. Hence, one of the key aspects of AI hyping can be undermined if people come to recognize the active role played by researchers and developers.

In contrast to some common views of technological development, technologists– researchers, developers, and engineers who construct AI systems– do not reveal the capabilities of, or the “animal spirits” motivating, technology (cf [52]). Rather, they play an active role in constructing the goals, capacities, and specific integrations of technological systems. Technologists are also subject to the processes at the heart of hyping; their objectives are often set by particular instances of AI hyping, and are resourced accordingly. For example, in 2023, Google’s parent company Alphabet increased its R&D expense by $6 billion from the previous year to be at the forefront of generative AI [53].

When corporate strategists buy into the hype around a capability—e.g., that generative AI will soon find application in every business domain (e.g [54])—they then reshape the active priorities of technologists working in their companies, perhaps shifting focus away from non-generative AI applications. They also reshape the work of non-technical workers in their companies, as compliance offers, legal counsel offices, user experience designers, and sales engineers shift their activities to understanding and enacting the hype claim (or in the case of dystopian hype, forestalling that claim).

This tendency in industry can make AI hyping seem like an inevitability, or a self-fulfilling prophecy. But analyzing and understanding the social dimensions of technological development in industry– i.e., understanding the tech industry as a sociotechnical systemin its own right– is crucial for popping the AI hype bubble. This approach also has a pedagogical dimension within the AI development process. Teaching technologists, students who will soon be technologists, and the public, about how corporate priorities are set, translated into objectives and key results, and eventually shape concrete systems people interact with is necessary background knowledge for discerning the limits of hype. And incorporating practices that embed a consideration of the social impacts of an AI product or system, for example impact assessments [55, 56] or consequence scanning [57], into the development process bring a consideration of the social to the already sociotechnical systems developers produce. Hyping claims of AI systems’ capabilities becomes more difficult, and less hyperbolic, in the face of the practicalities of companies who must make concrete investments like engineers’ labor time or cloud computing hours needed for training an AI system, and how those investments might be otherwise. The relationship between AI hyping and how people might ultimately be affected by AI systems depends on decisions made in corporate boardrooms, not in the inevitable unfolding of technological development in any of the directions AI hyping would suggest.

3.3 Goal 3: pedagogically engaging policymakers

AI hyping motivates policymakers and others to provide resources, take steps towards or away from AI governance, and otherwise impact the ways that organizations and governments respond to AI. That means that policymakers are involved in AI hyping too, and that they can potentially be motivated towards the hypers’ desired ends.

At the same time, policy typically does not focus on the individual case, but rather on the more general type: we do not regulate a specific hiring algorithm, but rather provide regulations for all hiring algorithms [58, 59]. Hence, the previous approaches may not be appropriate, as they emphasized the context- and use-specificity of an AI system, and thereby focused on the individual. Different approaches are required to help policymakers be resistant to AI hyping. These approaches must provide sufficient background knowledge, but also be attuned to the fact that policymakers need to act at a meta level. That can mean that AI pedagogy interactions with policymakers may most meaningfully occur on a case or issue basis (such as the question of AI’s impact on labor law).

Here, working to reach the two pedagogical goals that we have outlined above - understanding AI as part of already existing socio-technical systems and context, and recognizing the open-endedness of AI development - can have positive knock-on effects of AI policy and governance. Taking on a sociotechnical perspective in research and in the classroom can equip academic experts with the arguments and case studies they need to help policymakers understand both the technical intricacies of AI, as well as their socio-technical histories and contexts. In advisory roles, for example on advisory committees or in testimonies, academics can (and the co-authors have) engage directly with questions and concerns that policymakers may have. We have also participated in the production of policy briefings and directly engaged with policymakers as “trusted experts.” Related efforts include becoming trusted partners in peer learning environments.Footnote 4 In other words, there are many opportunities to bring a sociotechnical anti-hype pedagogy to policy-makers, though the direct impacts of those efforts are, and will continue to be, exceedingly difficult to measure given the complex processes that result in policies, laws, and regulations.

4 Conclusion

In this commentary, we have tackled the common assumption that AI hype is a social fact and suggested that, instead, it is best understood as a shared practice involving many stakeholders.

We have outlined how this shared practice is continually divorcing the technical from the social in AI, and how this gets us stuck in a loop of polarized narratives that prevent a more meaningful dialogue on AI and its impact. We have suggested to flip that script and reorient AI discussions around the socio-technical, stipulating that AI hyping and accounts of AI as deeply socio-technical are mutually exclusive.

Cued in by that, we have proposed that AI education and new ways of framing AI pedagogy can help mitigate the harmful effects of AI hyping. Specifically, we have outlined three goals that new pedagogical efforts should strive for: understanding AI as part of already existing socio-technical systems and context, recognizing the open-endedness of AI development, and engaging policymakers.

We hope that taking these steps, especially among educators, will help enable people to be less implicated in AI hyping and facilitate a more productive and equitable dialogue on the ways that AI systems impact people’s lives both now and in the near future.

We want to underline that there are already many efforts underway across academia, industry, government, and independent research institutes to help technologists and technology students (whether that is in computer science, data science, or other AI-related fields) grapple with the sociotechnical nature of AI, with useful outputs for this purpose (e.g [61,62,63]). There is plenty of attention and investment in the STEM field with regards to “data ethics”, “technology ethics” or “public interest technology.” However, non-STEM fields and disciplines are often left behind, even though they can be more intimately familiar with the technology’s deeply social nature. Our hope is that addressing AI hyping, and complicity with AI hyping, will in part result in a renewed attention and investment into the social sciences and humanities. Taking the sociotechnical seriously in AI, and using that as a way to address AI hyping, cannot come at the cost of bleeding out historically non-technical forms of education and expertise. That approach also would constitute complicity with AI hyping and divorcing of social and technical aspects. And that would mean that AI hyping was still out there for us to tackle.