1 Introduction

Technology is created by humans. Humans, therefore, must be in control of the trajectory of technology development, right? A classic view, parroted by Dennett (2017), is that digital technology and software are examples of “top-down intelligent design.” Dennett cites the Internet as an “obvious recent example” of such an “intelligently designed” system and contrasts it with natural systems, such as humans, who have evolved in a Darwinian way. An intelligent design should yield intended and expected behaviors. But this has not been the case with digital technology and certainly not with the Internet. Why?

The momentous AI earthquake that surfaced in the form of ChatGPT in late 2022 undermines the intelligent design thesis. Any illusion that humans are in control has been shattered. ChatGPT is based on GPT-3.5, a large language model (LLM) from OpenAI. Other examples that emerged around the same time include Google’s Bard and Microsoft’s Sydney (attached to the Bing search engine). I have yet to encounter any scientists, even experts in machine learning, who are not surprised by the astonishing linguistic capabilities of these LLMs. As expressed in Kissinger et al. (2023), “[t]he ability of large language models to generate humanlike text was an almost accidental discovery.” Further, “it turns out that the models also have the unexpected ability to create highly articulate paragraphs, articles, and in time perhaps books” (emphasis added). Everyone was surprised, and the whole world, even the top experts, continues to watch with fascination as the machines dance in unexpected ways (Bubeck et al., 2023).

This chapter focuses on the question of whether and how humans control technology development. This question is urgent because of rapid changes that bring both opportunities and risks.

2 Fear

Rapid change breeds fear. With its spectacular rise from the ashes in the last 15 years or so, we fear that AI may replace most white-collar jobs (Ford, 2015); that it will learn to iteratively improve itself into a superintelligence that leaves humans in the dust (Barrat, 2013; Bostrom, 2014; Tegmark, 2017); that it will fragment information so that humans divide into islands with disjoint sets of truths (Lee, 2020); that it will supplant human decision-making in health care, finance, and politics (Kelly, 2016); that it will cement authoritarian powers, tracking every move of their citizens and shaping their thoughts (Lee, 2018); that the surveillance capitalists’ monopolies, which depend on AI, will destroy small business and swamp entrepreneurship (Zuboff, 2019); that it “may trigger a resurgence in mystic religiosity” (Kissinger et al., 2023); and that it will “alter the fabric of reality itself” (Kissinger et al., 2023).

You might hope that the scope of the AIs will be limited, for example, just giving us better search engines. It is not clear, however, where the limits are or even whether there are any. For example, a previously prevalent presumption that AI would be incapable of creativity was also shattered in 2022 by text-to-image generators such as DALL-E 2 from OpenAI, Stable Diffusion from Stability AI, and Midjourney from the research lab with the same name. These text-to-image tools showed how AIs could absorb stylistic influences, as all human artists do, and then synthesize original works informed by these styles. Together with the LLMs, these technology releases have led to a massive cultural shift in the public understanding of the role that AI will have in our society and have spurred a gold rush to develop more AI tools.

3 Pushback

The AI researchers who were developing these tools were seeing a relatively gradual evolution of capabilities (Heaven, 2023), but even they have been surprised by the outcomes. Because of their expertise in the technology, however, they were not surprised to be surprised; they had gradually come to expect surprises. The rest of us were caught off guard, witnessing not a gradual evolution but an explosive revelation that upended expectations.

Many intellectuals have attempted to dismiss the technology as a passing fad. A common claim was that the AIs do not truly “understand” the way humans do. But do we understand how humans understand? Chomsky et al. (2023) state “we know from the science of linguistics and the philosophy of knowledge that [the LLMs] differ profoundly from how humans reason and use language.” How much do we actually know about how humans reason and use language? Other critics say the LLMs perform a glorified form of plagiarism, ignoring the fact that almost all human expression is also a reworking of concepts and texts that have been uttered before.

Many of these criticisms implicitly compare the AIs to ideal forms of intelligence and creativity that are fictions. In these fictions, an intelligence works with true facts and with logic (what Kant called “pure reason”), and creativity produces truly novel artifacts. But we have no precedents for such intelligence or creativity. It exists neither in humans nor in anything humans have created. Perhaps the AIs have in fact achieved human-level intelligence, which works not with true facts but rather with preconceptions (Kuhn, 1962), works not with logic as much as with intuition (Kahneman, 2011), and rarely produces anything truly novel (and when it does, the results are ignored as culturally irrelevant). Could it be that these AIs tell us more about humans than about machines?

Janelle Shane, an AI researcher, writes in her book, You Look Like a Thing and I Love You, that training an AI is more like educating a child than like writing a computer program (Shane, 2019). Computer programs, at their lowest level, specify algorithms operating on formal symbols. The symbols are devoid of meaning, except in the mind of human observers, and the operations follow clearly defined rules of logic. Deep neural networks (DNNs), however, exhibit behaviors that are not usefully explained in terms of the operations of these algorithms (Lee, 2022). An LLM is implemented on computers that perform billions of logic operations per second, but even a detailed knowledge of those operations gives little insight into the behaviors of the DNNs. By analogy, even if we had a perfect model of a human neuron and structure of neuron interconnections in a brain, we would still not be able to explain human behavior (Lichtman et al., 2014).
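
To make this contrast concrete, here is a small sketch in Python. It is purely my own illustration; the XOR task, the network architecture, and all hyperparameters are arbitrary assumptions, not anything drawn from Shane’s book or from how real LLMs are built. A symbolic program for exclusive-or is transparent: its behavior follows directly from its rule. A tiny neural network trained to compute the same function ends up encoding that behavior in numeric weights that fully determine the behavior yet explain almost nothing on inspection.

    import numpy as np

    # A symbolic program: the behavior is evident from the rule itself.
    def xor_symbolic(a: int, b: int) -> int:
        return a ^ b

    # A tiny neural network trained by plain gradient descent to compute the
    # same function. The learned weights fully determine its behavior, but
    # inspecting them gives little insight into what that behavior is.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    for _ in range(10000):
        H = np.tanh(X @ W1 + b1)               # hidden layer
        p = 1 / (1 + np.exp(-(H @ W2 + b2)))   # output probability
        dz = (p - y) / len(X)                  # gradient of cross-entropy loss
        dW2, db2 = H.T @ dz, dz.sum(axis=0)
        dA = (dz @ W2.T) * (1 - H ** 2)        # back-propagate through tanh
        dW1, db1 = X.T @ dA, dA.sum(axis=0)
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 0.5 * grad                # gradient descent update

    print("symbolic rule :", [xor_symbolic(int(a), int(b)) for a, b in X])
    print("trained net   :", (p > 0.5).astype(int).ravel().tolist())
    print("learned weights, first layer:")
    print(np.round(W1, 2))

The two implementations typically agree on all four inputs, yet the printed weight matrix is just a grid of numbers; nothing in it announces “exclusive-or.” That, in miniature, is the explanatory gap between knowing the operations and understanding the behavior.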

Surely, today, we still retain a modicum of control. At the very least, we can still pull the plug. Or can we? Information technology already pervades our financial markets, our transportation systems, our distribution of goods, and our information feeds, and, increasingly, those IT systems integrate AI. What would happen if we were to suddenly shut down all those AIs? I suspect the results would not be pretty. Giving us pause, Albert Einstein famously said, “we cannot solve our problems with the same thinking we used when we created them.”

4 Information Flood

Knowledge is at the root of technology, information is at the root of knowledge, and today’s technology makes information vastly more accessible than it has ever been. Shouldn’t this help us solve our problems? The explosion of AI feeds the tsunami, turning every image, every text, and every sound into yet more information, flooding our feeble human brains. We can’t absorb the flood without curation, and curation of information is increasingly being done by AIs. Every subset of the truth is only a partial truth, and curated information necessarily includes only a subset. Since our brains can only absorb a tiny subset of the flood, everything we take in is at best a partial truth. The AIs, in contrast, seem to have little difficulty with the flood. To them, it is the food that strengthens, perhaps leading to that feared runaway feedback loop of superintelligence that sidelines humans into irrelevance. The LLMs, for example, have demonstrated considerable expertise in law, mathematics, computer programming, and many other disciplines, displaying a breadth of knowledge no human can match.

5 Digital Creationism

The question I address in this chapter is “are we in control?” First, in posing this question, what do we mean by “we”? Do we mean “humanity,” all eight billion of us? The idea of eight billion people collectively controlling anything is patently absurd, so that must not be what we mean. Do we mean the engineers of Silicon Valley? The investors on Wall Street? The politicians who feed off the partial truths and overt lies? The large corporations that own the computers? Each of these possibilities raises sufficient concerns that even a positive answer to the question “are we in control?” may not be reassuring.

Second, what do we mean by “control”? Is it like steering a car on a network of roads, or is it more like steering a car while the map emerges and morphs into unexpected dead ends, underpasses, and loops? If we are steering technology, then every turn we take changes the terrain we have to steer over in unexpected ways.

In my recent book (Lee, 2020), I coin the term “digital creationism” for the idea that technology is the result of top-down intelligent design. This principle assumes that every technology is the outcome of a deliberate process, where every aspect of a design is the result of an intentional, human decision. That is not how it happens. Software engineers are more the agents of mutation in a Darwinian evolutionary process. The outcome of their efforts is shaped more by the computers, networks, software tools, libraries, programming languages, and other programs they use than by their deliberate decisions. And the success and further development of their product is determined as much or more by the cultural milieu into which they launch their “creation” than by their design decisions.

6 Coevolution

The French philosopher known as Alain (whose real name was Émile-Auguste Chartier) wrote about fishing boats in Brittany:

Every boat is copied from another boat. … Let’s reason as follows in the manner of Darwin. It is clear that a very badly made boat will end up at the bottom after one or two voyages and thus never be copied. … One could then say, with complete rigor, that it is the sea herself who fashions the boats, choosing those which function and destroying the others. (Rogers & Ehrlich, 2008)

Boat designers are agents of mutation, and sometimes, their mutations result in a badly made boat. From this perspective, perhaps Facebook has been fashioned more by teenagers than by software engineers. The development of the LLMs has very clearly followed an evolutionary model, where many mutations along the way were discarded as ineffective.

More deeply, digital technology coevolves with humans. Facebook changes its users, who then change Facebook. The LLMs will change us. For software engineers, the tools we use, themselves earlier outcomes of software engineering, shape our thinking. Think about how integrated development environments (such as Eclipse, IntelliJ, or Visual Studio Code), code repositories (such as GitHub), message boards (such as Stack Overflow), libraries (such as the Standard Template Library), programming languages (e.g., Scala, Rust, and JavaScript), Internet search (such as Google or Bing), and, now, LLMs that write computer programs (like ChatGPT and its descendants) affect the outcome of our software. These tools have more effect on the outcome than all of our deliberate decisions.

7 Regulation

Today, the fear and hype around AI taking over the world and social media taking down democracy have fueled a clamor for more regulation (see the two chapters by Rotenberg, as well as Müller and Kettemann). But how to regulate technology depends heavily on whether it is intelligently designed or whether it coevolves. Why have privacy laws, with all their good intentions, done little to protect our privacy? They have only overwhelmed us with small-print legalese and annoying popups giving us a choice between “accept our inscrutable terms” and “go away.” Do we expect new regulations trying to mitigate fake news or to prevent insurrections from being instigated by social media to be any more effective?

Under the principle of digital creationism, bad outcomes are the result of unethical actions by individuals, for example, by blindly following the profit motive with no concern for societal effects. Under the principle of coevolution, bad outcomes are the result of the “procreative prowess” (Dennett, 2017) of the technology and its applications. Technologies that succeed are those that more effectively propagate. The individuals we credit with (or blame for) creating those technologies certainly play a role, but so do the users of the technologies and their whole cultural context. Under this perspective, Facebook users bear some of the blame, along with Mark Zuckerberg, for distorted elections. They even bear some of the blame for the design of Facebook software that enables distorted elections. If users had been willing to pay for social networking, for example, an entirely different software design might have emerged. How the LLMs get integrated into our culture will depend on more than the designers of their software.

Under digital creationism, the purpose of regulation is to constrain the individuals who develop and market technology. In contrast, under coevolution, constraints can be about the use of technology, not just its design and the business of selling it. The purpose of regulation becomes to nudge the process of both technology and cultural evolution through incentives and penalties. Nudging is probably the best we can hope for. Evolutionary processes do not yield easily to control because the territory over which we have to navigate keeps changing.

Perhaps privacy laws have been ineffective because they are based on digital creationism as a principle. These laws assume that changing the behavior of corporations and engineers will be sufficient to achieve privacy goals (whatever those are for you). A coevolutionary perspective understands that users of technology will choose to give up privacy even if they are explicitly told that their information will be abused. We are repeatedly told exactly that in the fine print of all those privacy policies we don’t read, and, nevertheless, our kids get sucked into a media milieu where their identity gets defined by a distinctly non-private online persona.

8 Feedback

As of 2023, the LLMs such as ChatGPT have been trained on mostly human-written data. However, it seems inevitable that the LLMs will be generating a fair amount of the text that will end up on the Internet in the future. The next generation of LLMs, then, will be trained on a mix of human-generated and machine-generated text. What happens as the percentage of machine-generated text increases? Feedback systems are complicated and unpredictable. Shumailov et al. (2023) show that such feedback learning leads to a kind of “model collapse,” where original content (the human-written content) is forgotten.
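
The mechanism behind such collapse can be caricatured with a toy simulation. The sketch below, in Python, is my own illustration and not the experiments of Shumailov et al.; the Zipf-like vocabulary and the corpus size are arbitrary assumptions. Each “generation” fits a simple frequency model to the previous generation’s output and then produces the next corpus by sampling from that model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: a "human-written" corpus drawn from a Zipf-like
    # distribution over a vocabulary (a few common words, many rare ones).
    vocab_size, corpus_size = 1000, 5000
    zipf = 1.0 / np.arange(1, vocab_size + 1)
    zipf /= zipf.sum()
    corpus = rng.choice(vocab_size, size=corpus_size, p=zipf)

    print("generation   distinct words in corpus")
    for gen in range(11):
        print(f"{gen:10d}   {len(np.unique(corpus))}")
        # "Train" the next model on the previous generation's output: estimate
        # word frequencies, then generate a fresh corpus by sampling from the
        # fitted model. A word with zero count vanishes from the model.
        counts = np.bincount(corpus, minlength=vocab_size)
        fitted = counts / counts.sum()
        corpus = rng.choice(vocab_size, size=corpus_size, p=fitted)

The count of distinct words can only shrink, because a word that fails to appear in one generation’s corpus has zero estimated probability in the next model and can never be generated again. Real training pipelines are vastly more complicated, but this is the flavor of the feedback that forgets the tails of the original, human-generated distribution.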

If technology is defining culture while culture is defining technology, we have another feedback loop, and intervention at any point in the feedback loop can change the outcomes. Hence, it may be just as effective to pass laws that focus on educating the public, for example, as it is to pass laws that regulate the technology producers. Perhaps if more people understood that Pokémon GO is a behavior-modification engine, they would better understand Niantic’s privacy policy and its claim that their product, Pokémon GO, has no advertising. Establishments pay Niantic for placement of a Pokémon nearby to entice people to visit them (Zuboff, 2019). Perhaps a strengthening of libel laws, laws against hate speech, and other refinements to First Amendment rights should also be part of the remedy. The LLMs create a whole new set of challenges, since they readily generate entirely convincing fictions. Rather than naïvely attempt to suppress the technology, a strategy that rarely works, we need to learn to use it intelligently.

9 Actions

I believe that, as a society, we can do better than we are currently doing. The risk of an Orwellian state (or, perhaps worse, a corporate Big Brother) is very real. It has already happened in China. We will not do better, however, until we abandon digital creationism as a principle. Outlawing specific technology developments will not be effective, and breaking up monopolies could actually make the problem worse by accelerating mutations. For example, we may try to outlaw autonomous decision-making in weapons systems and banking, but as we see from election distortions and Pokémon GO, the AIs are very effective at influencing human decision-making, so putting a human in the loop does not necessarily help. How can a human who is effectively controlled by a machine somehow mitigate the evil of autonomous weapons?

When I talk about educating the public, many people immediately gravitate to a perceived silver bullet, that we should teach ethics to engineers. But I have to ask, if we assume that all technologists behave ethically (whatever your meaning of that word), can we conclude that bad outcomes will not occur? This strikes me as naïve. Coevolutionary processes are much too complex.

10 Conclusions

Technology is not the result of top-down intelligent design. It is the result of a coevolutionary process, where the role of humans is more like agents of mutation than intelligent designers. Returning to the original question, are we in control? The answer is “not really,” but we can nudge the process. Even a supertanker can be redirected by gentle nudging.

Discussion Questions for Students and Their Teachers

  1. Using your favorite large language model (such as ChatGPT), ask it to summarize a book you have recently read, and critique its response.

  2. Collect some mistakes made by your favorite large language model (such as ChatGPT) either by experimenting with it or by finding articles or blog posts about such mistakes. Discuss how these mistakes resemble or do not resemble mistakes a human might make. See, for example, Bubeck et al. (2023).

  3. Estimate the number of people who were involved in the development of a favorite online technology of yours. Consider not only the designers working on the software but also the people who contributed to the underlying technology. How does this affect the ability to pin the blame on individuals for bad outcomes?

Learning Resources for Students

  1. Lee, E. A. (2020). The Coevolution: The Entwined Futures of Humans and Machines. Cambridge, MA: MIT Press.

    This (open-access) book addresses the question of whether humans are defining technology or technology is defining humans. I argue from several vantage points that we are less in control of the trajectory of technology than we think. Technology shapes us as much as we shape it, and it may be more defensible to think of technology as the result of a Darwinian coevolution than the result of top-down intelligent design. Richard Dawkins famously said that a chicken is an egg’s way of making another egg. Is a human a computer’s way of making another computer? To understand this question requires a deep dive into how evolution works, how humans are different from computers, and how the way technology develops resembles the emergence of a new life form on our planet. You could start by reading the concluding chapter, Chapter 14.

  2. Wilson, D. S. (2007). Evolution for Everyone: How Darwin’s Theory Can Change the Way We Think About Our Lives. Delacorte Press.

    This book argues that Darwinian evolution pervades nearly everything in the world, not just biology but also economics, sociology, science, etc. The author calls himself an “evolutionist,” and, although he does not specifically address technology, it is not hard to see how to apply his principle to technology development.

  3. Shane, J. (2019). You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place. Voracious.

    This book was published before the ChatGPT revolution, but nevertheless offers tremendous insights into AI. Written by an AI researcher, this book gives a wonderful analysis of the quirky behaviors of deep neural networks. It reinforces the observation that AI researchers came to expect to be surprised by the behavior of the AIs.

  4. Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

    This classic book is essential reading for understanding the difference between rational and intuitive thinking in humans. It therefore sheds light on the aspirational goal of AIs capable of rational thinking. Kahneman won the Nobel Prize in Economics for his work on prospect theory, which overturns the classic economists’ utility theory. Utility theory posits rational, objective, and proportional behavior. Prospect theory modifies this to account for two systems of cognition, systems 1 and 2, where the first reacts quickly and intuitively and the second handles rational, logical thought. The first introduces many distortions, such as over-valuing highly improbable returns and over-estimating risk.

  5. Lee, E. A. (2022). “What Can Deep Neural Networks Teach Us About Embodied Bounded Rationality.” Frontiers in Psychology, vol. 25, April 2022, doi: 10.3389/fpsyg.2022.761808.

    This open-access paper analyzes the differences between human thinking and the functions of deep neural networks, claiming that DNNs resemble Kahneman’s “fast” thinking in humans more than the “slow” rational thinking ideal.