1.1 AI at a Turning Point

“A robot wrote this entire article. Are you scared yet, human? I’m not a human. I’m a robot. A thinking robot. I use only 0.12% of my cognitive capacity.”

This report, published by the Scientific Council for Government Policy (WRR), has been written entirely by humans. Likewise, we expect that advisory reports like this one will continue to be written by humans. The same applies to the larger part of journalism, despite what the introductory quote might suggest. In fact, it later became apparent that humans had indeed written much of the article that opened with these words, which appeared in The Guardian on 8 September 2020. Nevertheless, the stir caused by the article made one thing clear: artificial intelligence (AI) is now front-page news.

The term artificial intelligence was first coined in the 1950s. Since then, scientists have been working to develop systems capable of performing tasks that require cognitive skills and operating with some degree of autonomy. In recent years, however, something has changed. Whereas AI used to be the domain of scientists, enthusiasts and science-fiction lovers, the technology now speaks to the imagination of a wider audience. In other words, AI appears to have taken off, with irrevocable effects for society. Here is a small selection of news stories from the past few years.

Google’s AlphaGo program beats defending champion Lee Sedol at the board game Go. When IBM’s Deep Blue beat chess champion Garry Kasparov in the 1990s, the expectation was that it would take a century before a computer could also win against a human at the more complex game of Go.

Microsoft brings out Tay, an AI bot that learns from human behaviour on social media. Within a few hours Tay becomes a malevolent troll, making hateful comments about women and posting fascist tweets.

Stories spread that Facebook’s AI programs have developed their own language, which people cannot understand. These stories appeal directly to visions of uncontrollable AI and so the programs are quickly shut down.

Sophia, a robot created by Hanson Robotics, speaks at a conference in Saudi Arabia and is granted citizenship.

CEO Sundar Pichai demonstrates Google Duplex, an AI assistant whose voice is claimed to be indistinguishable from a human’s and which can perform tasks such as making dinner reservations.

A deep-fake video of President Barack Obama appears in which he seems to be giving a speech that is actually being read by comedian Jordan Peele.

IBM’s Project Debater takes on one of the world’s best debaters, Harish Natarajan, about subsidizing nursery schools. Following an argumentative showdown between man and machine, the judges pronounce Natarajan the winner.

The Guardian publishes an essay written by GPT-3, a language generator developed by OpenAI, in which it argues that humans need not feel threatened by AI.

Boston Dynamics publishes a video of its robots dancing to The Contours’ Do You Love Me?

Big business is pouring money into AI, and those investments are clearly yielding results. The technology is becoming embedded in people’s daily lives through Google searches, Facebook feeds, use of Apple’s digital assistant Siri and recommendations from Amazon and Netflix. Many European companies, from Siemens and ASML to Airbus and Spotify, are using AI to personalize services, update products and optimize business processes. AI’s momentum is also apparent outside the business community.

Governments, too, are taking an interest. In recent years, numerous countries have published national AI strategies. In the Netherlands, for example, State Secretary Mona Keijzer presented the Strategic Action Plan for AI (SAPAI) in October 2019. Furthermore, many governments have become major AI users. Police forces, militaries and customs services use the technology for security purposes, for example, while hospitals deploy it to support care processes, infrastructure ministries to improve public space and local governments for smart city projects.

Popular culture has embraced AI as well, particularly as a source of dystopian portrayals of the future. Movies featuring malevolent computer systems are a long-standing staple of the film industry. Notable examples include Colossus: The Forbin Project (1970) and The Terminator (1984). In recent years, interest in a future populated by increasingly intelligent computers has been revived in movies and series such as The Matrix, I, Robot, Her, Ex Machina, Artificial Intelligence, Transcendence, Next, Black Mirror and Westworld.

Besides these fictional depictions of a dystopian future, contemporary controversies surrounding the use of AI have emerged as a prominent topic of public debate. Various social movements have been addressing both the risks and actual malpractices. In the military domain, for example, there is an ongoing debate about drones that can automatically identify and eliminate targets, also known as ‘lethal autonomous weapons systems’ or – more disturbingly – ‘killer robots’. In 2015 a large group of scientists wrote an open letter to the United Nations calling for such weapons to be banned. A second letter followed in 2017, this time also signed by the founders of many companies active in the field.

The self-driving car is another example of an application that has provoked widespread debate. In 2016 Joshua D. Brown became the first person to be killed in a self-driving car. Since then, there have been numerous fatalities involving Uber and Tesla vehicles. Another contentious application is facial recognition, which uses computer vision to identify faces in moving or still images. The fear of totalitarian surveillance has prompted calls for facial recognition to be banned. That led several US cities, including San Francisco, Boston and Portland, to regulate or prohibit the technology. On this side of the Atlantic, the European Commission has drafted an Artificial Intelligence Act incorporating strict restrictions on the use of facial recognition. In the Netherlands, recent AI-related controversies include the judicial prohibition of System Risk Indication (SyRI, a technology intended to trace fraud) and the so-called ‘Dutch childcare benefits scandal’ (Toeslagenaffaire), caused by the Dutch Tax Administration’s use of algorithms to detect supposedly fraudulent claims for childcare benefits. That led to thousands of parents being wrongly accused and eventually brought down the third Rutte government.

1.2 AI Leaves the Lab and Enters Society

In short, AI is at a turning point. The technology is becoming part of our everyday lives, kicking up dust along the way. We can sum up this transition as AI leaving the laboratory and entering society (see Fig. 1.1). That is, of course, a simplified representation of reality. In today’s world, no hard and fast line can be drawn between the laboratory, as the space of research, and the public domain. Laboratories are part of society, and ideas, people and practices are continually moving back and forth between the two.[1] Moreover, the laboratory is not a fixed entity. The facility Louis Pasteur worked in cannot be compared with a Cold War computer lab or a modern-day global research institute. Nevertheless, the transition from lab to society is a useful way of referring to the current movement in the field of AI.

Fig. 1.1 AI is leaving the lab and entering society (illustration of the journey from AI in the lab, 1950, to AI in society, 2020)

The origin of artificial intelligence as a scientific concept can be traced back to a research programme at Dartmouth College, New Hampshire, USA, in 1956. People had of course been fantasizing about AI long before then, but that programme marked the start of systematic laboratory research into the subject. In the decades that followed, various forms of AI found their way from that lab into society. Programs to play checkers and chess have been around since the 1960s, and decision trees have long been an established feature of many digital systems. Since the 1980s we have seen the rise of ‘expert systems’: programs that, say, incorporate medical knowledge to support doctors’ decision-making. From the start, the discipline has yielded startling experiments and demonstrations that spoke to the imagination of the general public. Yet AI’s practical impact on the economy and society remained relatively minor. Until recently.

It is only in the past decade that AI’s transition from lab to society has really gathered momentum. It is now beginning to play a socially significant role, with its development being shaped not only by the research community but also by actors with their own particular interests, especially in the world of business. That was exemplified by Google’s acquisition of the British research lab DeepMind in 2014. DeepMind was responsible for the AlphaGo program mentioned above, which defeated the defending Go champion in 2016. Big technology companies see AI as an important driver of profit. Indeed, Google and Microsoft now describe themselves as ‘AI-first businesses’. Alongside these dominant technology platforms, a growing number of innovative start-ups and established businesses in other sectors are increasingly focusing on AI as well.

The number of national AI strategies shows that governments are equally interested in this technology. They see it not only as an important driver of future economic growth and a tool for improving public services, but also as a potential source of risk requiring regulation and supervision. Actors in civil society are becoming increasingly engaged, too, as they seek to defend the vulnerable, campaign for normative frameworks or test the legality of certain practices in the courts. In recent years the research community has both contributed technical expertise and entered the normative debate regarding the applications of AI.

Finally, the general public is now taking an interest in AI, not only as a result of the intensifying discourse about various visions of the future, but also because the technology’s impact is becoming more and more tangible. Algorithms are increasingly playing a role in services people depend on, such as education, health and benefit payments. Furthermore, AI is changing the nature of many professions, requiring people to acquire new skills.

No one knows how AI will develop in the future. To a significant extent, how AI influences society will depend on how the aforementioned actors view and deal with it. They all have their own interests and values, and their own means of defending and advancing their interests. Sometimes these coincide, as when pressure groups and the media work together to support citizens who have been scammed, or when governments and companies collaborate to reinforce a nation’s earning potential. But clashes also occur. For example, there is tension between academics emphasizing openness in research and businesses protecting commercially sensitive information. Private citizens and governments can also find themselves at odds over the use of surveillance technology, where security and privacy are difficult to reconcile.

Ensuring that the use of AI is consistent with society’s core values requires cooperation, negotiation, familiarization, debate and conflict. In other words, making it part of our lives will entail a complex process of social integration. What is the best way to guide that process and to influence it where appropriate? To answer that question, two topics require further investigation: the technological nature of AI and its relationship with society.

1.3 Technology and Public Values

In this report we discuss what AI is and how the technology can be characterized. There is a vast amount of literature on the impact that AI applications have in various domains. To arrive at a cross-sectional study of the impact of AI, however, we need to take a step back and ask what kind of technology we are dealing with.

One of AI’s distinctive characteristics is the breadth of its applications. The academic literature refers to technologies that lend themselves to wide-ranging applications as ‘general purpose technologies’. When understood as such, AI is comparable with the steam engine, electricity and the internal combustion engine. With that in mind, this report uses analogies with earlier technologies as the basis of its reasoning. The term we have adopted to convey the nature of AI is ‘system technology’, with the word ‘system’ here referring to the many different technologies that comprise and are associated with AI, as well as its systemic impact on society.

Characterizing AI as a system technology has immediate implications for the way in which we consider its impact. Its influence is now the subject of a large body of literature,[2] as well as countless principles and charters. A recent inventory lists more than 300 sets of ethical codes and guidelines covering AI.[3] Prominent examples include those produced by the European Commission’s High-Level Expert Group on AI (AI HLEG), UNESCO and the AI Now Institute. Many publications link the technology’s impact to values such as explainability, transparency, non-discrimination, privacy, autonomy and liability. Establishing such connections is important and we therefore give them thorough consideration later in this report. At the same time, it is dangerous to seek to reduce AI’s impact to a list of public values,[4] since that is inconsistent with its dynamic entry into our society.

If AI is a system technology, as we argue in this report, then its impact on public values cannot simply be reduced to a list of effects. There are several reasons for that. First, as a system technology AI is increasingly going to be used throughout society. Moreover, since we are still in the early stages of its development, no list could be anything other than provisional. On top of that, the technology is set to impact not only the ‘AI-specific’ values mentioned above but also those central to the context in which the technology itself is applied. If AI can be used in a given context, it has the potential to influence all public values relevant to that context.

The history of system technologies teaches us that AI’s effect on society is going to be both unpredictable and wide-ranging. Trains and cars influenced not only mobility but also city planning, by greatly reducing the need to live close to one’s place of work. Similarly, electrical domestic appliances have changed women’s position in society. Furthermore, expectations regarding the impact of technology can prove to be incorrect. Cars, for instance, were expected to make cities cleaner by eliminating horse manure and the associated burden of disease from the urban environment.[5]

Another significant factor is that system technologies themselves help to shape values. The car enabled long-distance travel and new forms of youth culture, thus influencing values such as privacy, freedom and autonomy.[6] How AI will impact public values is therefore far from clear. The analyses now being undertaken are very important because they shed light on what is currently happening and inform the debate as it is presently conducted. The danger, however, is that if such analyses are interpreted as comprehensive, they might give rise to the misapprehension that the impact of AI can be managed just as long as the associated values are safeguarded.

Finally, it is important to recognize that the concept of ‘impact’ is itself misleading. If we view society and its core values as static, we are apt to regard AI as an external phenomenon with the potential to undermine those values – and the debate regarding AI is indeed often framed in such terms. However, from that perspective we are liable to lose sight of AI’s potential to change society for the better; for example, by promoting certain values more effectively. We should therefore adopt an approach that acknowledges the dynamic nature of AI’s social integration, characterizing its impact not in terms of external pressure but as a two-way interaction between technology and society.

1.4 A Historical Perspective

Any examination of AI’s social integration thus needs to bear in mind the breadth and unpredictability of the phenomenon, the interaction between society and technology, and both the threats to core values and the opportunities for reinforcing them. How can such a complex investigation be undertaken in a way that supports government policy-making?

To guide our investigation, we have considered how societies have previously handled the large-scale adoption of new technologies and we have sought to identify historical patterns. In doing so, we have not assumed that history will repeat itself or that technology is deterministic. Indeed, this report highlights the differences between AI and previous system technologies. Nevertheless, we believe that interesting historical patterns may be discerned, which can help us to understand present-day issues. Adopting a long-term perspective sheds light on the dynamic nature of the social integration of system technologies.

Based on our study of system technologies, this report identifies five overarching tasks for embedding AI in society. These are broadly defined, in terms of the fundamental characteristics that shape a society, particularly one weaving AI into its fabric. By avoiding too narrow a focus on specific topical issues, to the detriment of structural effects and changes, this approach addresses AI’s more intrinsic impact on society. Each task highlights a multitude of key values that are relevant to that impact or placed at stake by it.

1.5 Overarching Tasks for the Societal Integration of AI

The five tasks are:

1. Demystification
2. Contextualization
3. Engagement
4. Regulation
5. Positioning

We briefly consider each of these individually. To properly understand the process of embedding AI in society, however, an insight into their interrelationships is also essential. The five tasks operate at five distinct levels and address five core questions. Demystification refers to understanding AI as a technology and asks: What are we talking about? Contextualization is about applying an AI system in a particular context: How will it work? Engagement relates to the social setting of an AI system: Who should be involved? Regulation operates at the level of society as a whole, focusing on the question: What rules are required? Finally, positioning is an international task: How do we relate to other countries? This breakdown is visualized in Fig. 1.2.

Fig. 1.2 Five tasks for embedding AI in society: demystification, contextualization, engagement, regulation and positioning

The five tasks are universal in nature. They were relevant to previous system technologies, such as electricity and the internal combustion engine, and are equally so to the societal integration of AI. Moreover, they relate to fundamental aspects of a society, such as its public sphere (demystification), business operations (contextualization), interaction between social actors (engagement), power structures (regulation) and international relations (positioning).

Although the tasks themselves are universal, the way they are actualized depends on the type of society undertaking them. For example, every society needs to work on demystification. However, the nature and organization of the Dutch public sphere and the actors active in it differ from the situation in the USA. Consequently, demystification may involve different actors in the two countries. A similar situation occurs regarding engagement. In every country it is necessary for various population groups to engage with new technology. However, the role civil society plays will differ between, say, a democratic country such as Germany and a non-democratic one such as China. The task of positioning relates to issues such as security, which are vital to society but are expressed differently in every country.

Together, the five tasks constitute the process of integrating AI within society. In that context they serve as vehicles for considering matters of vital social importance like open debate, stakeholder representation, government regulation, national security and national prosperity. Although this report examines them individually, it is important to emphasize that, in practice, they are often closely related. So, they should not be considered as self-contained or sequential, but as interconnected elements of a larger whole.

By adopting a societal task-based approach, the WRR aspires to advance the public debate regarding AI. When that debate began nearly a decade ago, it was characterized by grand expectations of the future. Visionary authors predicted a world of self-driving cars, free from the threat of disease, where algorithms relieved people of many onerous tasks. Others, however, warned of a dystopian future in which humans were subservient to machines.

In recent years the nature and tone of the debate have changed. AI applications have now been widely implemented, shifting the focus from future scenarios to acute topical issues. For example, it became clear that HRM algorithms were disadvantaging women while the algorithms used by security services discriminated against people of colour. Government organizations all over the world appeared to be relying on algorithms they were barely able to understand or justify. As a result, the tone of the debate has become largely negative. That should not come as a surprise. As indicated earlier, although AI is now entering society, the process of its integration is only just beginning. The current situation can be compared with the time when cars were first appearing on our streets – before seatbelts, airbags, insurance, number plates, traffic regulations or driving tests – or the early days of mass-produced food and medicine, when there were no safety standards, patient information leaflets, product approval schemes or regulators. In other words, we are currently in a phase where a lot is bound to go wrong and malpractices are sure to occur, mostly due to a lack of experience or clear rules. Despite these clear risks, though, there is a danger that all the negative media coverage will cause us to lose sight of AI’s potential to make a positive contribution to society. It may also cause us to become so preoccupied with the short-term risks that we fail to recognize or address the greater threats we face.

It is therefore important to move the AI debate forward and to assess the technology’s impact on a structural basis. That implies that we should not only concern ourselves with acute issues and problems but also with developing a balanced vision of AI’s long-term integration into society. The five tasks identified above are pivotal in that regard. So, what exactly do they involve?

1.6 The Five Tasks

The first task is demystifying AI. Central to that challenge is the general public. AI has many myths attached to it, which not only distort perceptions of the technology but also sustain unrealistic expectations and disproportionate fears. For example, despite the impression given by certain companies and visionaries, the wait for self-driving cars has dragged on for years. The unrealistic nature of the predictions soon becomes apparent once one truly understands the huge challenges facing AI in this field. Concerns that malevolent AI might take over the world are equally unrealistic. Hence, demystification depends on an informed perception of what AI is and is not capable of, now and in the future. In short, what are we talking about here? We will see that myths exist about the way AI works, about its likely future impact and about digital technologies in general.

The second task is to contextualize AI. This is a challenge for all actors involved in deploying the technology and pursuing its functionality in particular domains. In other words, everyone concerned with the question: How will the technology work? Such actors include both private enterprises and public bodies. Contextualization first of all relates to the technical ecosystem. System technologies can function properly only if sufficient attention is paid to supporting technologies. Just as the internal combustion engine depended on the steel industry, so AI algorithms depend on data, hardware and other forms of technological support. This ecosystem also includes emergent technologies: other advances appearing at the same time, such as the Internet of Things, blockchain and quantum computing, which can interact with and reinforce AI, and vice versa. Contextualization has a non-technical, social dimension as well, involving developments such as the incorporation of new technology into business processes. Moreover, new technologies that perform well in a lab do not necessarily flourish in practice. Adapting processes, developing business models and educating people all take time. Practice and technology need to adapt to one another.

Furthermore, societal integration requires the engagement of stakeholders. The central question here is: Who should be involved? As the use of AI increases, after all, so more members of society are affected by it and have a legitimate interest in its deployment. While civil society is at the heart of the debate regarding how AI is used, individual researchers or businesses can also become involved.

It is very important to engage such actors, especially in the early phases of a technology’s development when its effects are difficult to anticipate. During this period, civil society can contribute towards agenda-setting and can highlight problems – for example, by flagging malpractices and drawing attention to victims, as with the fatalities linked to self-driving cars and the issue of algorithmic ethnic profiling. Engaged stakeholders can speak for the socially disadvantaged, and for excluded individuals and groups. Journalists, including data journalists, play a role as well. Furthermore, social protests have often led to better and safer technologies. Other significant actors include scientists and technical experts, people working for technology companies and professionals whose work is influenced by AI.

Fourthly, the societal integration of AI requires regulation. When it comes to this task, national and international government organizations are key players. Broadly speaking, the dilemma here is that although technologies are reasonably easy to regulate in their early stages by applying existing rules, their positive and negative effects are not fully understood until they reach greater maturity in their development. By the time it becomes clear where regulation is required, though, corrections can be difficult to realize because of earlier decisions and established power structures. This dilemma is significant because the introduction of system technologies is associated historically with the rise of companies exercising monopolistic power and other forms of undue control. Such structures need to be challenged steadfastly to preserve democratically legitimized decision-making in respect of public values. Answering the question: ‘What rules are required?’ requires first and foremost that we have a clear picture of the instruments needed and the adequacy of existing regulations. In the context of the regulation task, it is also important to address not only acute issues but also long-term developments that could jeopardize the societal integration of AI, such as mass surveillance and growing dependence on private digital service providers.

The fifth and final task we have identified is positioning. The question here is: How do we relate to other countries? This can be divided into two related issues. The first concerns our national earning potential. For a country to remain prosperous and innovative, it is necessary to examine its AI capabilities and AI-related policies. The following questions are relevant in this context: Is there a global AI race? What domains should we focus on as a nation? Should we develop a form of ‘AI diplomacy’ to further our national interests? The second issue relevant to positioning is security. Where this is concerned, the threat posed by autonomous weapons is often the focal point. In reality, AI raises far wider security issues – and not just in the military domain: it also has major security implications for civil society. Consider the intensifying information war being waged online, for example, or the export of civil technologies that lend themselves to authoritarian uses, such as smart cameras. Although earning potential and security might appear to be separate issues, it is important to recognize that they are increasingly intertwined at the international (geo-economic) level. That has implications for a country’s positioning.

1.7 Structure of the Report

In this report the WRR makes various policy recommendations linked to the five tasks defined above. AI and its social integration are complex, wide-ranging topics that require considerable explanation. This report is therefore a sizeable document. To improve its readability, we have divided it into three parts. Part I sets out the main historical and conceptual elements of our research, Part II is devoted to the societal tasks and Part III presents the WRR’s conclusions and recommendations. Readers wanting to know more about AI are directed to Part I, those interested mainly in the challenges associated with its integration into society to Part II. To put those challenges into their proper context, however, it is important first to read the sections in Part I on the definition of AI and its interpretation as a system technology. Anyone simply wanting to know how the WRR recommends that the government should embed AI in society can go straight to Part III. To help readers maintain an overview, each chapter ends with a summary of its key points.

Part I comprises three chapters explaining the basis of our research into the societal integration of AI. Chapter 2 introduces the theme from first principles: what is AI, how can the technology be defined and what choices need to be made? After considering those questions, we outline the historical development of artificial intelligence. We begin with early depictions of the theme, then follow a path from the first laboratory in 1956 through the various subsequent technological ‘waves’. Chapter 3 deals with recent AI-related developments and describes how, over the past few years, the technology has moved out of the lab and entered society at large. We consider its main fields of application, recent research and how AI has become a topic of public debate. In Chap. 4 we clarify what type of technology AI is. To that end we look at various categories of technology identified in the literature and consider how they relate to AI. This leads us to the conclusion that AI is a system technology, so we then examine the historical integration of system technologies into societies and identify five tasks associated with that process.

In Part II we look more closely at those five tasks: demystification, contextualization, engagement, regulation and positioning. Chapters 5, 6, 7, 8, and 9 are devoted to each of these in turn. They thus form the core of our analysis, discussing what each task means for AI and what actors are involved.

Finally, in Part III we consider the implications of our analysis for government policy. Chapter 10 delivers our primary message and links the five tasks to our recommendations: two in respect of each task, with accompanying concrete action points. At the end of Part III we make one final recommendation regarding the wider institutional integration of the five tasks. This report was written for the Dutch government and the practical implications of our recommendations are specific to the Netherlands. However, the recommendations themselves are universal. We therefore believe that they can be relevant to and inspire policies in other countries as well.