Artificial intelligence is not just another technology – it is a ‘system technology’ that will fundamentally change our society. That is the key message of this report. Government and society therefore need to be much more aware of and actively involved in AI’s integration into daily life. The government in particular needs to focus on five overarching tasks to help shape the integration process, because only then will it be able to continue to protect the civic values affected by AI. Such a challenge demands a policy infrastructure that reflects both a political and an administrative commitment.

AI is to our century what electricity was to the nineteenth and the internal combustion engine to the twentieth. It is not a discrete technology that can be overseen and managed by a group of experts or policymakers from one or more ministries. AI is everywhere, it is continuously being improved and it generates complementary innovations, which makes it a very versatile but also unpredictable phenomenon. But unpredictability and uncertainty about how to integrate or embed AI into society cannot be used as an excuse to sit back and watch it take its course. Rather, the potentially unlimited value that AI could deliver calls for a carefully considered approach to this process. Government must also consider the broader agenda in this respect so that it can continue to intervene in and adjust the process in the future.

By examining AI through the lens of previous system technologies, we can learn a great deal about how such technologies become embedded in society. The lessons learned from embedding earlier system technologies form the basis for the recommendations that the WRR presents in this final chapter. The key point is that considering AI as a system technology has implications for the way we look at public values. History teaches us, as we have argued in Chap. 2, that the impacts of system technologies on public values cannot simply be captured in a fixed list. After all, given that AI has the potential to be applied throughout our entire society and we are currently only at the beginning of its development, the impact of AI on public values will be not only broad but also unpredictable. In the previous chapters and in the final analysis of this chapter, we therefore approach public values in a way that reflects the dynamic nature of AI.

1 Five Tasks as Lessons from the Past

From an analysis of the history of previous system technologies, we have distinguished five overarching tasks for the integration of AI into government and society: demystification of what it is and can do; contextualization of its development and application; engagement by various parties; regulation of the technology, its use and its social implications; and, finally, national positioning in relation to other countries and international organizations (Fig. 10.1). We have discussed these tasks in detail in Part 2 of this report and recap them briefly here, also indicating which civic values are at stake and what risks are involved if we do not face up to these tasks.

Fig. 10.1 Five tasks for the social integration of AI: demystification, contextualization, engagement, regulation and positioning

AI as a System Technology

There is a rich body of academic literature discussing technological revolutions, epochal innovations and technical eras. A recurring central concept in this corpus is that of 'general purpose technologies': those not used for a single specific purpose but applicable broadly throughout society. Examples include the steam engine, electricity, the combustion engine and the computer. In Chap. 4 we showed how AI has the three characteristics of a general purpose technology: it (1) is ubiquitous, (2) is subject to continuous technical improvement and (3) enables complementary innovations in other fields.

In this report we have labelled AI a system technology. On the one hand this points to the fact that – like electricity and combustion engines – it is part of a wider system of other technologies, while on the other we use this term to emphasize the systemic effect such technologies have on society.

  • What Do We Mean by AI?

In this report we have adopted the definition formulated by the High-Level Expert Group on AI (AI HLEG) of the European Commission: “systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.”

The broadest definition of AI equates it with the use of algorithms, while the strictest views it as the imitation of all human skills ('artificial general intelligence'). The former stretches the concept of AI enormously while the latter defines it out of existence. The AI HLEG version is sufficiently specific while – because it is not tied to particular techniques such as deep learning – still leaving room for new methods and developments.

  • AI and Digital Technology

AI is strongly intertwined with other digital technologies such as computing and data but does not coincide with them. One of the fathers of computing, Alan Turing, was also the inventor of the so-called Turing test, which is used to assess AI systems. AI is dependent on huge amounts of data, much of it drawn from the internet; current deep learning methods require large amounts of digital information to work effectively. At the same time, AI is not synonymous with these other technologies. We have outlined its development and the many ways it is linked to computers, data and the internet in Chaps. 2 and 3. But AI also has a separate scientific and historical background, with its own 'springs' and 'winters'. While computers have been widely used since the Second World War, and the internet has been ubiquitous since the 1990s, the emergence of AI as a social phenomenon is a far more recent development. This is why it deserves attention in its own right.

1.1 Task 1: Demystification

The first overarching task – demystification – concerns preconceptions about AI as a technology. In fact, it is really about the question: What is AI? System technologies always go hand in hand with extreme preconceptions. Excessively high expectations lead to disillusionment and ill-considered applications, while exaggerated fears lead to rejection of a technology and unexploited opportunities. Clinging to such preconceptions will have a negative effect, particularly in the longer term. We argue that more realism is needed to be able to ask the right questions about societal integration and civic values. In the past we saw all manner of unrealistic expectations arise concerning the future of electricity and automobiles, driven by public demonstrations and races. Commentators thought that trains, the telegraph and later the internet would bring global peace by connecting the world. Conversely, the imagery surrounding earlier system technologies in the form of Frankenstein’s monster and words like ‘electrocution’ – which linked electricity to mortality – stirred up fears of these breakthroughs.

There are also numerous myths surrounding artificial intelligence. AI systems are said to be rational and objective, but also to work like an unfathomable ‘black box’. It is thought that the technology could eventually match and even exceed all human capabilities, and even turn against humanity. In addition, there are all sorts of myths associated with digitalization in a broader sense, such as the idea – popular until quite recently – that the development of the internet should be ungoverned and, more importantly, unregulated. Another mistaken preconception is that there is no alternative to the current form of digital technology and that digitalization offers a solution to every problem.

If we do not address such ideas, society may come to rely too heavily on AI systems – with all manner of unwelcome consequences. Equally, these preconceptions could lead to AI being rejected and its benefits being reaped insufficiently, if at all. Finally, exaggerated preconceptions can prevent an open discussion of crucial questions surrounding the societal integration of a technology. Demystification primarily involves issues such as legal protection, the public's confidence in the technology, adequate provision of information and the quality of the public debate.

1.2 Task 2: Contextualization

The second task we have distinguished is contextualization. This concerns the application of AI and the question: How will the technology work? In other words, contextualization relates primarily to the technical ecosystem. System technologies do not function independently; they are dependent on other supporting technologies or their underlying facilities. An example is the car’s dependence on the oil industry, petrol stations and a road network. Moreover, system technologies become connected over time to other emerging technologies, as the car is connected to electronics. In addition to the technical ecosystem, contextualization is also about the role of the social ecosystem. At a macro level, a lasting effort will be needed to adapt work processes, value chains and knowledge development. Only after this has been done will organizations be in a position to use the technology effectively and become more productive. At the micro level this will require behavioural change and effective interaction between the users and the new technology.

AI also requires various supporting technologies or facilities, such as data, telecommunication networks, chips and supercomputers. Furthermore, we are already seeing increasing connectivity between AI and other new technologies such as 5G networks, the ‘Internet of Things’ and quantum computing. As far as macro level developments are concerned, the expectation that AI will make human work redundant on a massive scale appears unfounded. Rather, a process of intensive training and practice will be required to make it an effective tool in the workplace. At the micro level the task is to achieve effective human-machine interaction. Here the relative autonomy of AI systems forms the main challenge.

Insufficient attention to supporting technologies and facilities (such as good quality, secure and readily available data and networks) will lead to poorly functioning AI systems, underutilization of opportunities or stagnation of development. Just as the road network was essential for the use of the car, so AI requires technical adaptations to its ecosystem. Attention to this aspect is particularly important in those areas where a country can benefit most from AI. For the Netherlands this means areas in which the country has traditionally had a strong international position (such as agriculture and services) and areas where AI can help address existing challenges (such as those in healthcare). Other countries will obviously make other choices, such as manufacturing in the case of Germany or defence in the case of France. Insufficient attention to the social ecosystem will likewise lead to poor implementation and all manner of problems, or even rejection of the technology if the users of AI systems are not adequately equipped to deal with them. So not only are the quality and safety of AI applications at stake, but also the public benefits that can be gained in areas ranging from wider access to better quality healthcare and education to better government services.

1.3 Task 3: Engagement

The third overarching task, engagement, concerns the societal environment of AI and the question: Who should be involved? When new system technologies arise, large companies and governments have the means and interests to be early adopters. Civil society parties usually do not become involved until later. As such these new technologies initially only reinforce the existing balance of power in society. Consider how the deployment of the steam engine in factory production processes marginalized workers or how adapting the infrastructure for the automobile forced non-drivers (at that time mainly poorer people) off the roads.

Stakeholders’ engagement in society can take a wide range of forms. At one extreme is violent resistance, while non-violent protests and calls for bans are also ways of restricting a new technology. At the other end of the spectrum, civil society can play its part in improving a technology – for example, by contributing its own expertise or by applying it in its own practices.

AI in its current manifestations also reinforces existing imbalances. Less affluent citizens, ethnic minorities and women are among the groups discriminated against by algorithms. Civil society is now mobilizing to protest against a number of controversial applications, such as autonomous weapons, facial recognition and the use of AI by the police. Much of this opposition takes the form of protests and calls for bans, but strikes are on the table too. But when it comes to more co-operative forms of engagement aimed at the useful social integration of AI – such as contributing expertise or using the technology to tackle challenges related to climate change, poverty or human rights – much still remains to be gained.

What will happen if engagement lags behind? It is likely that existing imbalances will be reinforced and the balance of power between governments and large companies on the one hand and citizens on the other further distorted. In particular, the rights of various weaker social parties will be threatened. So, if there is not enough engagement in AI, fundamental rights such as equality, privacy, non-discrimination and autonomy as well as democratic principles like participation, inclusion and pluralism will all be at stake. A regulatory framework is an important prerequisite for shaping this engagement, which brings us to the fourth task for government.

1.4 Task 4: Regulation

The task of regulation is relevant at the societal level, focusing on the question: What frameworks are required? When a new technology leaves the lab, it is initially difficult to oversee, and hence to adapt or develop the necessary frameworks. Much is still unclear about its nature and effects, and so as long as AI is not yet embedded across the full breadth and numerous contexts of society it is difficult to know which specific civic values it might compromise.

In the early phase, technology companies often promote self-regulation by the sector or argue that users themselves can be relied upon to safeguard certain values. Gradually, however, structural issues come to light that require a more active government role. Other system technologies were initially concentrated in the hands of a few companies, such as (in the US) GE and Westinghouse in the case of electricity or the ‘big three’ in Detroit when it came to automobiles. But other factors also contribute to the need for a more active government role. As technology becomes more deeply embedded in society, it increasingly touches upon civic values that fall under the responsibility of government. With time the broader social effects of a new technology become clearer, and so policy and legislation become less and less tentative. From this point government needs to develop a broader and more unified legislative agenda; separate dossiers no longer suffice.

With AI we saw an initial focus on self-regulation. Today the momentum has shifted towards more active government intervention (the European draft AI Act is a good example of this). At the same time structural issues are coming to light, which government will also have to address if it wants to manage the effects of the technology. These include the concentration of power in the hands of large companies, the growth of surveillance in society and increasing public-sector dependence on commercial businesses.

Of course, there are no panaceas or ‘silver bullets’ for the regulation of system technologies. Properly embedding a technology in society requires a broad set of measures developed over a long period of time. An example is the internal combustion engine that made the motor car possible: seat belts, insurance, number plates, airbags, driving tests, traffic rules and road signs were all steps that contributed towards its social integration – a process that continues to this day because the car and its environment are being developed continuously. It was impossible to foresee that all these measures would be necessary when the car was first introduced. However, this does not mean that the legislator can endlessly vacillate about what the best approach might be. The task of regulation requires both a greater role for government and a broader legislative agenda. If government waits too long to develop its agenda, lawmaking will be left behind by the dynamism of the process. Meanwhile, other stakeholders will have taken control of the way AI is embedded to the extent that it will be almost impossible to reverse this development. Existing frameworks then lose their legitimacy and our social system based on shared civic values will come under threat.

1.5 Task 5: Positioning

The final overarching task we have identified is positioning. This relates to the international arena and the question: What is our international position? Firstly, this concerns the role that a new system technology can play in boosting national competitiveness. In the past, technologies like the steam engine, electricity and the internal combustion engine helped many countries strengthen their competitive position in the international arena. They even influenced the nature and outcomes of international conflicts; railways were essential to Prussia's victory over France in 1870–1871, and the first computers, used for code-breaking, contributed towards vanquishing the Germans in the Second World War. These two dynamics (competitiveness and security) feed the idea of a global race to dominate a new technology, and some countries even try to develop and maintain such innovations completely within their own borders. However, history teaches us that system technologies always have a global character, and that international co-operation is in fact the best way to improve individual countries' competitiveness and security.

The same dynamics are at play with AI. There is much talk of an 'AI race', with the US and China setting the pace. Many countries have therefore developed AI strategies in recent years in order to join this race and to deploy AI to strengthen their competitiveness. But there is also a growing awareness of the technology's impact in the areas of conflict and security. The most prominent application here is so-called autonomous weapons. Several international initiatives have now been launched to control the development and spread of this new arsenal. But there are many other military and civilian applications of AI that can threaten national security.

If countries fail to develop their position in AI and pay too little attention to broader co-operation at the international level, they will miss out on opportunities to strengthen their competitiveness. Moreover, not enough consideration of their international position in AI will leave countries insufficiently aware of and prepared for the security risks the technology brings.

1.6 Five Tasks, Five Transitions

These five overarching tasks are thus critical to AI’s successful integration into society. But it is also important to emphasize their interrelations. Demystification, for example, strengthens society’s ability to engage with AI technology. So, although these tasks can be separated analytically, in practice a combined approach is needed. The stakes involved in integrating AI successfully are high (utilizing innovation potential, societal acceptance, etc.), and the process puts various civic values at risk – although it is impossible to predict in advance which will be affected, or how. We have argued elsewhere in this report that it is impractical to draw up an exhaustive list of civic values and analyse them all in the light of AI. The unpredictable nature of system technologies necessitates a more dynamic perspective. We therefore suggest that the debate on AI and its consequences for society be conducted on the basis of the five identified tasks. Many contemporary and future issues can be addressed within this broad framework.

With this cluster of five overarching tasks, we thus offer a long-term framework for AI’s societal integration. This, however, does not answer the question of what needs to be done in the short term in the light of these tasks, particularly from the government point of view. In other words, what transitions are involved? Below we describe the transition associated with each task (Fig. 10.2) and then, in the next section, explain each transition with the help of concrete recommendations. The transitions are:

  1. From fiction to facts;

  2. From abstraction to application;

  3. From monologue to dialogue;

  4. From reaction to action; and

  5. From nation to network.

Fig. 10.2 Every task requires a transition

1.7 A Broad Agenda for AI

The five transitions represent an AI agenda for the years ahead. Our first observation in this respect is that the breadth of this agenda implies that national governments cannot be solely responsible for its implementation. Across all five tasks a variety of actors in society have a role to play and responsibility to take. For example, academics will be needed in the transition from fiction to a more fact-based approach to AI. Ordinary citizens can help shape this transition too, by informing themselves about AI or by following the National AI Course. The media also have an important role to play in informing people who are unmotivated or unable to find out more for themselves. Meanwhile, much of the transition from abstraction to application will fall to industry. Government bodies may later become major users of AI, but initially all manner of private-sector players will need to answer the question of how it can be used in practice. In short, all the tasks and the associated transitions will require a collective effort by various actors.

A second observation is that not all steps towards achieving the tasks will require the same effort. In fact, some things will happen automatically. As society collectively gains more experience with AI, for example, we can expect a degree of demystification and thus a more realistic awareness of its implications. Moreover, initiatives are already emerging in some areas. These include autonomous weapons and their effect on countries’ international positions, which are receiving attention around the world. When it comes to regulation, not every new application of AI will require brand new legislation. Existing rules already provide the necessary framework for a variety of applications, and in some cases self-regulation by companies or other societal parties will suffice – for the time being, in any case.

In this report, which was originally written for the Dutch government, we are therefore selective in the tasks we highlight: our recommendations concern only those areas in which the WRR believes the Dutch government should take more initiative. However, these recommendations may also apply to other governments. For each recommendation we suggest a number of concrete actions. We end by describing how these recommendations can be supported both institutionally and politically.

2 Transition 1: From Fiction to Facts

The task of demystification involves a transition from fiction to facts. This means that the current dominance of far-reaching preconceptions with utopian and dystopian outcomes must give way to a more rational understanding of the facts. In short, we need a more balanced picture of AI. The transition we are advocating here does not mean that government has to start telling society ‘the truth’ about AI. It does need to make learning about AI an integral part of its public function, however, and so evaluating AI and reflecting on its goals will likewise need to become central in that function. This also means that government will need to respond critically to parties with overoptimistic expectations, and likewise to those that only see risks. Our first recommendation, therefore, is to bring about this transition within government itself.

Recommendation 1

Make learning about AI and its potential applications an explicit goal of government’s public function.

Two reflexes are typically observed when government uses a new technology. On the one hand there is ‘technosolutionism’. A recent example of this in the Netherlands was the ‘coronavirus tracing’ app. Its introduction was announced early in the pandemic as an important part of the government’s response to COVID-19, but the stance adopted by the authorities automatically stifled discussion on its usefulness. No-one asked what the app actually contributed towards the response or whether – based on expert knowledge or the requirements of doctors and community health services – other, non-technical solutions would be preferable. Development of the tracing app was given high priority, but the stakeholders underestimated how long that would take. The outcome of a competition to build the app was that none of the entrants met the conditions set, and this was both a disappointment for the app’s backers and a confirmation for those who had expressed misgivings.

The other side of the coin is a 'technophobic' reflex fuelled by failed or banned projects. Serious consequences have ensued recently from the inappropriate use of data by the Dutch government in two projects: the payment of childcare allowance by the tax authorities and the fight against fraud by local authorities using the System Risk Indicator (SyRI). The general public is now aware, albeit within a certain frame, that the government uses algorithms and that this potentially has negative consequences: privacy is undermined, and by extension fundamental rights are violated. As a result, the Dutch government has become more hesitant to use algorithms in general and AI in particular (footnote 1).

Neither reflex is productive. Technosolutionism creates sky-high expectations and so sets the stage for disappointment when projects fail. Of late, however, heightened risk awareness in the Netherlands appears to be triggering the technophobic reflex instead. The result is that government is missing opportunities to improve existing practices. It is inevitable that mistakes will be made, but that does not mean that government should abandon the technology altogether. So how can the right balance be found?

As an emerging system technology, AI faces a lengthy process of use, practice and adaptation. It is not a simple tool or a magic wand that can be purchased and then left to perform its tricks. This is why learning must be an explicit goal of AI policy – more so than it is today. It also means that policy must take account of potential errors (without detriment to civic values). More explicit attention to learning will also allow executive agencies to experiment without immediately being held accountable in the political arena. For this to happen, such experiments also need to be recognized and supported at board level within the organizations concerned.

To build capacity in AI through learning, the WRR believes that government should first focus on attracting talent and training staff. AI will become a core component of organizations' primary processes, so technical and non-technical staff must be able to communicate at the same level in order to ask the right questions. Learning also implies organizing basic administrative tasks such as the timely and diligent archiving of information generated by AI processes, the transfer of knowledge to new staff members and access to databases and algorithms. On the latter point we must emphasize that government needs to reach adequate contractual agreements with private IT suppliers regarding access to data, algorithms and other relevant AI information. This in turn requires government's own knowledge of AI to be up to scratch.

Such goals must be agreed explicitly before a large AI project is undertaken, not treated as incidental administrative burdens. So, this approach will also have implications for the way government works. Today it is common to allocate substantial budgets to large IT projects with fixed delivery dates. With AI, however, a more iterative process involving smaller projects is preferable. The required capabilities can be built through learning and evaluation, after which the projects can be scaled up.

Moreover, it is not only the government's executive agencies that will benefit from a learning approach based on progressive insights into AI. So too can political, legislative, supervisory and legal actors. How this might work in practice is illustrated by an example from the Dutch Council of State, the country's highest administrative court: after having formulated a transparency assessment framework for the valuation of real estate, it later issued a second ruling further clarifying the requirements, inspired in part by what had been learnt from the first.

Concrete Actions for Recommendation 1

  • Work on building knowledge and capacity and on preventing dependency.

  • Start with smaller ambitions and projects and then scale up.

  • Explicitly allow room for mistakes and work with short evaluation cycles.

Wider society will also benefit from demystification, as it allows people to form a more realistic understanding of AI. This requires more than just knowledge, though; practical skills and an understanding of how to implement AI in different contexts are also needed. By analogy with the widely used term 'media literacy', in this context we refer to 'AI literacy'. Developing this skill is essential to enable society to adopt a realistic approach to AI and the changes it brings. Clarity about the facts of AI and its use is an important prerequisite here. Our second recommendation follows on from that.

Recommendation 2

Stimulate the development of ‘AI literacy’ amongst the general public, beginning with the establishment of algorithm registers.

Various actors have a role to play in the process of demystification. Journalists, academics and industry can all contribute towards the genesis of myths or help debunk them. Some degree of demystification will therefore occur automatically over time, without the need for state intervention. A basic level of AI literacy will eventually help citizens be more critical of overly optimistic, overly pessimistic or simply false representations, although these can probably never be eliminated altogether. That said, a number of ongoing developments could require government to play a greater role.

Much of the media coverage of AI is sensationalistic. There is plenty of speculation about systems that will supplant people and disrupt society. This creates a need for more facts about what AI systems actually can and cannot do. Many of the reports also feed fears of various kinds, as described in Chap. 5. Finally, AI is becoming more and more associated with applications for surveillance and control, which puts the technology in a poor light.

To encourage more realistic perceptions and a better understanding of AI, a first step for government is to be more transparent about its own use of the technology. It can do this by establishing algorithm registers (see Box 10.1). In the Netherlands, the City of Amsterdam has already started such a register to provide citizens with details of where and how it uses algorithms. Utrecht and Rotterdam have now copied this initiative. In its progress statement entitled 'AI and algorithms' of 10 June 2021, the national government announced that it was to investigate "how an algorithm register could contribute towards increasing transparency on the use of algorithms by the government" (footnote 2). Three months later, on 6 September 2021, the government submitted its Dutch Digitalization Strategy (I-Strategie Rijk) for 2021–2025 to parliament. This states that the creation of an algorithm register is one of the ambitions all ministerial chief information officers and their executive organizations intend to work on in the coming years (footnote 3).

Box 10.1: Algorithm Registers

Calls for the creation of algorithm registers are increasing. On 19 January 2021 the Dutch parliament adopted a motion proposing the establishment of such a register to keep track of the algorithms the government uses, their objectives and the data they draw upon (TK 2020–2021, 33510: 16). The motion was prompted by the parliamentary inquiry into the childcare allowances scandal.

Some cities have already launched algorithm registers of their own, including Amsterdam in the Netherlands and Helsinki in Finland. The Amsterdam register describes algorithms for automated parking enforcement, for processing public nuisance reports, for action against illegal subletting and for crowd monitoring. In each case the register reveals what data was used to train the algorithm, how it is deployed, how officials use its output and how distortions (bias) and risks are dealt with.

Algorithm registers are also being considered further afield, as revealed in a report from the Law Society of England and Wales, the professional body for solicitors in that jurisdiction. This organization advocates a register for algorithms in criminal law, in each case recording key details such as transparency, standard operation and data use. The European Commission also seems to be anticipating the introduction of algorithm registers: its proposed AI Act requires the registration of 'high-risk' AI systems, including those for private use.
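To make the idea concrete, the entry fields described above can be pictured as a simple record structure. The sketch below is illustrative only: the field names are our own assumptions, not those of the Amsterdam register or any other existing system, but they mirror the kinds of details such registers publish for each algorithm.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AlgorithmRegisterEntry:
    """One entry in a hypothetical public algorithm register.

    Field names are illustrative; they follow the kinds of details
    described above for the Amsterdam register.
    """
    name: str                          # e.g. "Automated parking enforcement"
    purpose: str                       # what the algorithm is used for
    responsible_body: str              # the department accountable for it
    training_data: List[str]           # datasets used to train the model
    deployment_context: str            # where and how it is deployed
    human_oversight: str               # how officials use and review its output
    bias_and_risk_measures: List[str]  # how distortions (bias) and risks are handled
    risk_level: str = "unclassified"   # e.g. the draft AI Act's 'high-risk' label
    contact: str = ""                  # where citizens can ask questions or object

# A register is then simply a published, searchable list of such entries:
register: List[AlgorithmRegisterEntry] = [
    AlgorithmRegisterEntry(
        name="Parking enforcement scan cars",
        purpose="Detect vehicles parked without a valid permit",
        responsible_body="City parking authority",
        training_data=["annotated street images", "permit database"],
        deployment_context="Scan cars patrolling inner-city streets",
        human_oversight="An official reviews every candidate fine",
        bias_and_risk_measures=["periodic accuracy audits per district"],
        risk_level="high-risk",
        contact="algorithms@example-city.nl",   # hypothetical address
    )
]
```

Publishing entries in a structured, machine-readable form like this – rather than as loose prose – would also make it easier for journalists and civil society to compare registers across municipalities.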

Helping citizens understand the different types of AI applications being explored or used by government is a necessary next step. In our view, however, the creation of algorithm registers will only bring real added value if it also encourages conversation about AI use among both those already using the applications and those who will be affected by them in the future. The success of such registers will depend very much on the quality of the information provided, society’s capacity to use this information and the response of the responsible actors to any problems identified. The registers should therefore be reviewed periodically.

In addition, we advise government to be particularly aware of its own role in shaping public perceptions of AI. The way government uses the technology – and advertises that use – is bound to play a part in defining how ordinary people view AI and their emotional response to it. The government should start by investing more in AI applications that change society for the better or help tackle the major challenges it faces (such as reducing climate change and combating social inequality). In this report we have discussed the many ways AI can be put to good use – for example, to create healthier air, reduce energy consumption, improve diagnostic procedures, enhance medical assistance and ensure better animal welfare. We call upon government to encourage and facilitate more such applications of AI and to advertise their merits.

The titles of these projects should also be given more consideration. The Dutch anti-fraud system SyRI was initially named ‘Black Box’, which may unintentionally have contributed towards the preconception that ‘AI must be incomprehensible for humans’. Terms like ‘killer robots’ for autonomous weapon systems and ‘robot judges’ for AI in law also influence people’s perceptions. Designations of this kind evoke strong associations. Sometimes that is the intention – to paint a clearer picture, for example, or to focus a discussion. Such expressive terms can distract from the issues that really matter, though, so government must not underestimate the power of the words it uses.

Through public information and educational campaigns, government can help build basic knowledge of and familiarity with AI as well as making people aware of its possible pitfalls. But it is essential that these activities are not limited to the classroom and the workplace, as was the case with the Dutch 2019 AI Action Plan. That could suggest that AI is something we only need to worry about later, or that it will only affect people in certain occupations, when in fact its potentially wide-ranging application means that all of society will need at least a basic understanding of this technology. It is important first of all that interested citizens, those who want to know more, are informed about the use of AI applications. The algorithm register and, even more crucially, the discussion on the use of AI advocated by the WRR will only deliver added value if the whole of society has a basic understanding of the technology. Providing realistic information and education to improve the public's understanding of AI can increase confidence in it (see also Box 10.2). Particular attention should be paid to finding effective ways to impart this basic know-how to citizens who are less self-reliant.

Box 10.2: Current Programmes to Encourage AI Literacy

There are already several initiatives for citizens of the Netherlands interested in AI. The National AI Course is a good example that should be encouraged, as is the Dutch version of the Elements of AI course launched by NL AIC and Delft University of Technology. People looking for information on the use of AI can find it on government websites such as the Ministry of the Interior’s knowledge database (Kennisbank). To complement these sources, a website should be created with an overview of the information people need to be aware of if they want to use AI or are confronted by it. This could be similar to the existing sites for homebuyers, for example, or consumer watchdog sites.

More than providing technical knowledge, developing AI literacy involves building the public’s knowledge so they can put news reports about AI and its applications into perspective and develop a realistic idea of its potential and limitations. This can be seen in the same light as ‘media literacy’, which involves the competencies needed to participate in a media-dominated society.

Concrete Actions for Recommendation 2

  • Establish a government algorithm register for AI applications, initiate the conversation about the use of AI and ensure periodic evaluations.

  • Critically evaluate government’s own contribution in shaping public perceptions of AI.

  • Give greater priority to AI applications that benefit society and draw attention to them.

  • Contribute actively to public information and educational campaigns about AI.

3 Transition 2: From Abstraction to Application

The transition required for the task of contextualization involves the step from AI as an abstraction to its application. By ‘abstraction’ we here mean AI as a technology confined to the intellectual domain of research labs and academic reflection, remote from ‘real-life’ contexts. There is currently a lot of focus on the fundamental characteristics of AI systems and on related issues of transparency and explainability. Broadening this to include their practical application means paying more attention to the contexts in which the technology is used, in particular the technical requirements of the relevant ecosystem and the way users interact with AI.

Regarding this transition, first and foremost we examine the broader technical ecosystem that AI forms part of. We have formulated the following recommendation for the Dutch government.

Recommendation 3

Explicitly choose to develop a national AI identity, then investigate what adjustments this requires to the technical ecosystem in relevant domains.

Our ecosystem approach reveals that a lot of technology is needed for a well-functioning AI system. Permanent attention must be paid to talent development, research into algorithms, network quality, access to chips, building databases, developing cross-sectoral standards and building a secure ecosystem for sharing data and datasets. It is also important to keep tabs on emerging technologies that can give AI a boost. The government has already launched initiatives in several of these areas, including the Growth Fund and the Intergovernmental Data Strategy (footnote 4).

The WRR recommends focusing on one specific additional point, namely the technical adaptations required to facilitate the AI environment. We use the term 'enveloping' to describe how an environment is modified to allow a technology to function effectively within it, analogous to the construction of the road network to facilitate the motor car or the power grid for electrical appliances. These adjustments often cannot be left to the market alone. Moreover, the choices made may have far-reaching consequences for society. This means that government must be actively involved. Just as the development of the car in the twentieth century required the creation of a mobility infrastructure tailored to motor vehicles, so enveloping for AI means developing an environment that is 'readable' for the technology. AI systems need to be able to analyse their surroundings in order to interact with them intelligently (see also Box 10.3).

Box 10.3: Examples of Enveloping

Take autonomous vehicles. These days they have more and more intelligence built in, but are still far from being able to move completely independently in a complex environment. Adjustments to road surfaces and markings, or even the construction of specific infrastructure reserved solely for these vehicles (as with the motorway for the traditional car), are all big steps forward in the use of the technology. This does not necessarily mean adding new lanes to roads, but could instead take the form of special signs and signals or specific zones, such as industrial estates and other controlled environments, where experiments can be carried out safely.

The same applies to all manner of home automation systems, and also to complex industrial robots, which today are still ill-equipped to deal with the complexity and unpredictability of human behaviour. Experimenting with and investing in environments that are more easily readable for these technologies could make an important contribution towards their effective functioning and hence their usefulness.

It is impossible for the Dutch government to support every effort to make the domains affected by AI more readable for the technology. Of necessity, therefore, it must focus on a number of specific areas. The WRR thus advocates developing what we call the ‘Dutch AI identity’. This encompasses those domains on which our nation wants to focus in the development and deployment of AI. Within these parameters we as a country cannot risk failing to implement the necessary changes, whether because of co-ordination problems or other reasons, which is why this transition cannot and should not be left to the market alone.

This national AI identity could include those domains in which the Netherlands is traditionally strong or ones that are important drivers of the Dutch economy, such as certain segments of agriculture, horticulture, infrastructure and logistics. Developing AI here will help prevent Dutch industry losing market share or becoming too dependent on foreign suppliers, while at the same time it should generate new revenue models. In addition, the Dutch AI identity could include domains that embody important civic values and where the government has a specific responsibility to take a lead, like healthcare or effective governance. The so-called AI Coalition is already compiling plans to stimulate AI innovation in various sectors of the Dutch economy. By formulating a national AI identity, the government could help steer this process. One example of where such guidance is needed is agriculture, in certain segments of which a limited number of suppliers currently dominate sales of models, analytical tools, algorithms and information services. Another is healthcare, where there are ambiguities about the ownership and control of some data (footnote 5).

The government can also support the Dutch AI identity through a strategic procurement policy. As a major economic actor, it is in a position to stimulate markets by building demand for certain products. PIANOo, the Dutch Public Procurement Expertise Centre, is currently developing an innovation-focused procurement policy. In 2019 the government launched SBIR (Small Business Innovation Research) to encourage businesses to develop innovative AI applications for the public sector. The government could make more intensive use of these instruments. Moreover, procurement policy is fragmented in many areas, from education to local government. Central government can strengthen the development of the AI ecosystem and focus more on areas of application important for the Netherlands by making targeted use of procurement instruments and co-ordinating the underlying requirements and standards.

Concrete Actions for Recommendation 3

  • Define the domains and focal areas of a national AI identity.

  • Identify the technical requirements and opportunities in each of these domains.

  • Help shape the national AI identity by adapting procurement policy accordingly.

The transition from abstraction to application is not just about the technological context of AI, but also its behavioural and user contexts. We therefore make the following recommendation for this social ecosystem.

Recommendation 4

Strengthen the skills and critical capabilities of individuals working with AI systems by developing a suitable training and certification framework.

The WRR believes that more attention should be paid to human-machine interaction. Even where technical systems work properly and comply with ethical guidelines, a lot can still go wrong in practice – for example, because users do not know how to manage these systems or fail to critically evaluate their functioning. An important factor to consider here is that AI transforms existing working practices, changing the role of the human user and possibly rendering traditional safeguards inadequate. We may demand that a human user must always be responsible for decisions ('in the loop' or 'on the loop'; footnote 6), but we must also ask if this is a meaningful and realistic stipulation.

In an autonomous vehicle, for instance, given how long it takes a human to respond, the driver cannot be expected to intervene in time to prevent an accident. The same applies to humans who are required to oversee, interpret and manage increasingly complex analytical methods. People accustomed to algorithms functioning correctly are disinclined to question their results (automation bias), especially under pressure. While a person is still responsible in name, and so the human factor is still present, their putative role no longer corresponds with what is actually happening in practice.

One specific issue of human-machine interaction is how to approach the fallibility of both the human and the computer. If they come to different conclusions, it can be difficult to judge which is right. A human may be able to rectify an algorithm’s error, but an algorithm can likewise discover patterns that a human being will not consider or expect. So how might we organize the use of AI so that it is possible for a human to correct the machine and vice versa?

Box 10.4: Augmented Intelligence

Algorithms already exist to advise the police to patrol certain neighbourhoods and help teachers when streaming their pupils. But they can make mistakes. The algorithm in the Crime Anticipation System (a predictive policing tool used by the Dutch police) deployed officers to public parks to combat car theft. Its reasoning was that this crime tends to occur where people gather, and people gather in parks. The problem, of course, is that cars are not allowed in parks and so anyone with a modicum of common sense would reject that advice. But there are other cases where an algorithm may well discover a pattern of crime that humans have not yet thought of.

Similarly, teachers should not simply ignore the results of streaming algorithms but nor should they trust them blindly (automation bias).

So, we need to create a context in which the teacher or police officer is supported in their work while at the same time the fallibility of both human and machine are considered. In other words, rather than replacing human intelligence AI should instead augment and enhance it (‘augmented’ or ‘hybrid’ intelligence).
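One way such mutual correction might be organized in practice is to record the system's recommendation alongside the professional's decision and keep every disagreement for periodic review. The sketch below is a minimal, purely illustrative pattern; the data structure, function and field names are our own assumptions, not a description of any existing police or education system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    model_advice: str      # what the algorithm recommends
    human_decision: str    # what the professional actually decides
    rationale: str         # why the human followed or overrode the advice

disagreement_log: list[Decision] = []

def record_decision(case_id: str, model_advice: str,
                    human_decision: str, rationale: str) -> Decision:
    """Record every decision; keep human-machine disagreements for review."""
    decision = Decision(case_id, model_advice, human_decision, rationale)
    if decision.human_decision != decision.model_advice:
        disagreement_log.append(decision)
    return decision

# Example: an officer overrides the patrol advice from Box 10.4.
record_decision(
    case_id="2021-0042",
    model_advice="patrol public park for car theft",
    human_decision="ignore advice",
    rationale="cars are not allowed in the park",
)
```

Reviewed regularly, such a log serves both directions of correction: it can reveal systematic model errors (humans rightly overriding bad advice) as well as automation bias (humans blindly following the machine, or dismissing patterns it has genuinely discovered).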

The interpretation of AI outputs also involves human-machine interaction (see Box 10.4). For example, users need to understand the nature of the information generated by the system. This in turn requires knowledge of the difference between correlation and causality, of margins of error and of whether a specific algorithm generates more false positives or false negatives. Users must thus be provided with information about the capabilities and limitations of the systems they work with.
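The difference between false positives and false negatives can be made tangible with a few lines of arithmetic. The sketch below, using invented numbers for a hypothetical screening algorithm, computes both error rates from a confusion matrix; which of the two matters more depends entirely on the application context.

```python
# Illustrative confusion matrix for a hypothetical screening algorithm
# (all numbers are invented for the example).
true_positives = 80    # flagged cases that were genuine
false_positives = 20   # flagged cases that were in fact fine
false_negatives = 10   # genuine cases the system missed
true_negatives = 890   # correctly ignored cases

# Share of genuine cases the system misses:
false_negative_rate = false_negatives / (false_negatives + true_positives)
# Share of harmless cases the system wrongly flags:
false_positive_rate = false_positives / (false_positives + true_negatives)
# Of all flags raised, the share that are actually correct:
precision = true_positives / (true_positives + false_positives)

print(f"False negative rate: {false_negative_rate:.1%}")  # 11.1%
print(f"False positive rate: {false_positive_rate:.1%}")  # 2.2%
print(f"Precision of a flag: {precision:.1%}")            # 80.0%
```

A user who knows that roughly one in nine genuine cases slips through and that one in five flags is wrong is far better placed to weigh the system's advice than one who is simply told the system is '97% accurate' – which, on these same invented numbers, it is.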

Various actors have a contribution to make here, with the various AI labs in the Netherlands in a good position to play a key role. The government can also help by actively encouraging such work (as well as actually participating in some cases, as it does with the Police Lab). In particular, it needs to pay more attention to the dynamics of human-machine interaction in its own use of AI, but also to the behavioural context, to the requirements for using AI in its internal audit and supervision processes and to the application of guidelines.

To ensure effective human-machine interaction and strengthen the skills of the people working with AI, a system of training and accreditation for both humans and machines should be established. This could include certification, licences and specific requirements for certain applications of AI. The European draft AI Act, which distinguishes various levels of risk, provides a good starting point for the necessary requirements. Licensing procedures could be established by analogy with the system of licensing and approval used by health agencies to safeguard how new drugs are brought to market. That system also makes patient information leaflets compulsory, so that those prescribed the drugs can read about their side effects and possible risks. Certification is used in a wide variety of situations, from sustainable food production to compliance with standards for the use of chemicals (under the European REACH regulation for the registration, evaluation, authorization and restriction of chemicals). Organizations that meet the standards are certified by the competent body.

Effective human-machine interaction requires a system of certification not only at the product or organization level, but also for individual users. In various fields people who use certain technologies or have certain responsibilities are required to be certified. Obvious examples include electricians qualified to work on wiring in a building and the registration of professionals in healthcare, but all manner of other professionals also require certificates. Chartered accountants, for instance. In addition, many jobs (in the public as well as the private sector) require their holders to prove that they satisfy certain continuing education requirements. These are all forms of documentation that attest to a person’s proficiency in their work.

The WRR is not proposing that everyone involved with AI should be trained and hold a certificate or licence. Everyone is affected by electricity, another system technology, but only technicians with special responsibilities need to be certified to work with it. AI will likewise affect almost everyone, but only those who work actively with the technology or are responsible for its deployment should need to demonstrate they have acquired the necessary knowledge and skills. We also wish to emphasize that this is not just about possessing sufficient technical know-how, but also the ability to determine whether the necessary safeguards are being observed.

Concrete Actions for Recommendation 4

  • Pay explicit attention to the behavioural context and human-machine interaction in audits, supervision and the use of guidelines.

  • In addition to certification, licences and risk levels aimed specifically at AI systems and organizations, develop measures to guarantee that the people responsible for the technology possess the requisite knowledge and skills – a proficiency certificate or AI licence, for example (see Box 10.5).

Box 10.5: AI Licences

More research is required to determine how AI licences might work, who would need them and whether they should be made compulsory. Here we offer a number of points to consider.

  • Look at existing forms of proficiency certification, such as the register of medical professionals, pilot’s licences and the certification of mechanics, and whether similar approaches might be appropriate in AI.

  • Who exactly needs to obtain certification: the developer, the deploying company or institution or the individual end user? This will vary according to the context; AI in the form of a healthcare robot will require a different approach than a purely algorithmic application.

  • The relevant training programme should include a theoretical component. Its primary focus, however, should be AI in practice. How should it be used? What do users need to be aware of? How are the safeguards monitored? Above all, trainees should be given plenty of opportunities to practise: what can you do with the technology? Just as a diver certainly requires theoretical knowledge in order to be able to plunge safely into the depths, but first and foremost plenty of practical training, so AI certification should entail quite a lot more than the existing courses provide, which is mainly general knowledge and basic theory.

  • Practical knowledge of AI should also include a set of procedures that need to be carried out in complex situations or in the event of an emergency, much as medical standards exist for specific procedures in healthcare. Furthermore, users of these systems need to know when they can and may resolve issues themselves and when they need to seek the help of an expert.

  • Given AI’s enormous dynamism, it is advisable to require some form of continuing education for all holders of AI certification.

4 Transition 3: From Monologue to Dialogue

Engagement, our third overarching task, requires a transition from monologue to dialogue. The monologue here is the current situation in which discussion of AI is dominated by a relatively monodisciplinary group of technical specialists when in fact all manner of other actors and organizations should also be involved. The great distance between the developers of AI systems and the social environment in which those systems are applied also has the characteristics of a monologue. Citizens and civil society actors have their own expertise to contribute, and in addition an important role to play in providing feedback on how AI systems function in practice. In short, the conversation about the design and application of AI must be joined by a greater variety of actors. The Dutch government is already undertaking political initiatives to involve civil society in the development and application of AI-based applications. Illustrative of this is its declared intention to "encourage the business community and consumer organizations to jointly draw up a code of conduct for the use of consumer data and algorithms to influence purchasing behaviour" (footnote 7). However, consumer organizations and other bodies representing citizens' rights and interests can only fulfil their role if they have the capacity to do so. So, to effect the transition from monologue to dialogue, our first recommendation to government with regard to engagement is as follows.

Recommendation 5

Strengthen the capacity of civil society organizations to expand their work into the digital domain in general and AI in particular.

A number of parties in civil society already have a good grasp of the issues surrounding AI. This obviously applies to organizations engaged explicitly with the digital domain. In the Netherlands these include Waag, Bits of Freedom and Privacy First. These groups are increasingly managing to reach the general public and to put issues involving AI on the political agenda. Major human rights organizations like Amnesty International are now also paying close attention to the impact of this technology. Unfortunately, the same cannot be said of most organizations that focus on the interests of specific groups (employees, patients, teachers, people in poverty, disadvantaged and discriminated groups and so on).

Bodies like trade union federation FNV, patient advocacy group De Cliëntenraad, anti-discrimination think tank Artikel 1 and tenants’ union Woonbond do important work for specific groups in Dutch society. AI offers new opportunities for these organizations, but it could also threaten – and even damage – their position and that of the people they represent. Examples of such threats are the spectres of a ‘digital poorhouse’ to the detriment of impoverished people, a ‘New Jim Code’ that disadvantages people of colour and ‘digital open-air prisons’ that restrict the freedoms of minorities. It is therefore important that organizations of this kind be empowered to understand and address these effects. Moreover, their specific knowledge is indispensable for the further integration of AI into society. But that knowledge is currently absent from many discussions around this theme, one major reason for that being that these bodies tend to know little about the technology.

The government is responsible for upholding a strong democracy and so needs to ensure that diverse voices are heard on important issues. When a new system technology is introduced, civil society usually lags behind big business and government in its adoption. Yet grassroots voices are crucial when it comes to reporting abuses of the technology and finding new ways to exploit it on behalf of a whole variety of interest groups. The algorithm register mentioned earlier can mitigate this deficiency by making knowledge about AI use publicly available. In addition, it is important that the government actively approach and consult interest groups as part of its AI policy.
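To make the register idea concrete, the sketch below shows the kind of information a single register entry might record. The field names and structure are purely illustrative assumptions on our part – the report does not prescribe a schema, and existing registers may differ.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AlgorithmRegisterEntry:
    """One entry in a public algorithm register (hypothetical schema)."""
    system_name: str          # e.g. a permit-triage or fraud-scoring system
    responsible_body: str     # the public organization deploying the system
    purpose: str              # the decision or task the system supports
    decision_impact: str      # e.g. "advisory" or "fully automated"
    data_sources: List[str]   # categories of input data used
    human_oversight: str      # how and when a human reviews outputs
    contact_point: str        # where citizens and groups can ask questions

def systems_affecting(entries: List[AlgorithmRegisterEntry],
                      keyword: str) -> List[AlgorithmRegisterEntry]:
    """Find register entries relevant to a particular constituency."""
    return [e for e in entries if keyword.lower() in e.purpose.lower()]
```

A simple query function like systems_affecting illustrates the point of the register for civil society: an organization representing tenants or patients could locate the systems that affect its constituency without needing to inspect each system’s internals.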

The government can also contribute to a more prominent role for civil society by providing grants and facilitating training programmes or partnerships. Nor should the formal and institutional mechanisms that engage particular interest groups in the democratic process be overlooked. In particular, we are referring here to the need to involve works councils and other codetermination bodies in AI-related decisions. Whilst Dutch law stipulates that employers need the consent of their works councils before processing employees’ personal data, the specific workplace implications of AI – in the form of staff monitoring systems, for instance – have not yet been adequately addressed by those councils.

Concrete Actions for Recommendation 5

  • Include AI literacy in funding policy and training programmes.

  • Encourage co-operation between interest groups and similar organizations in the digital domain.

  • Inform civil society stakeholders of the various ways they can engage with decision-making around the use of AI, such as through co-determination forums.

  • Involve interest groups structurally, and from an early stage, in political decision-making about AI policy and regulations.

The second point in the transition from monologue to dialogue centres on the feedback loop between AI in practice and AI on the drawing board. A lot of attention is paid to the quality and reliability of data used in AI systems and to their analytical methods, their functioning and their transparency – that is, their input and processes – but much less to their outputs: in other words, to whether AI does what it is supposed to do, and does it satisfactorily.Footnote 8 Integrating outcomes into the process by creating feedback loops to developers and other stakeholders would seem to be a logical requirement for AI systems, yet it is not an activity sufficiently rooted in practice. Consequently, our second recommendation in respect of engagement is as follows.

Recommendation 6

Make sure that effective feedback loops exist between AI’s developers, its users and the stakeholders who experience it in practice.

There are various reasons why feedback loops receive relatively little attention. One is that real-life experiments are regularly conducted without the explicit consent of those involved. After the experimental phase, systems are implemented without first undergoing an evaluation of their effectiveness. Of particular relevance here is the fact that AI systems often draw on data about generic groups rather than bespoke information. As a result, the effectiveness of important legal safeguards, such as consent to use data and compensation in the event of malpractice, is significantly reduced.Footnote 9 Another problem is that such applications can engender discrimination against certain groups and yet leave them with few opportunities to defend themselves. Also, once a system’s functionalities and operating instructions have been agreed upon, the self-learning process (and the corresponding feedback) may require changes that necessitate a reassessment of the entire system. This is especially common in government. Such long and complex processes hamper the working of feedback loops.

In addition, as AI transitions from the lab to society the requirements for system feedback change. Systems are often extensively tested in the lab using carefully compiled sets of test data. When these systems are used in practice, in many cases the monitoring and feedback process is much less thorough than in that controlled research environment.

Another reason for a lack of feedback may be that the requisite information is difficult to obtain. For example, an employee recruitment algorithm will not integrate feedback on candidates who have been unjustly rejected, as there is no data on how they would have performed had they actually been given the job. Algorithms that provide pupil streaming recommendations require feedback data that will only become available many years later, and even then the results may be ambiguous (because the student ultimately did not follow the recommended trajectory, for instance). If a student achieves better educational outcomes than the algorithm predicted, was the algorithm incorrect or did the student ‘up their game’ later in their schooling?
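The recruitment example can be made concrete with a stylized sketch of the missing-counterfactual problem: outcome labels exist only for candidates who were actually hired, so the feedback a developer can observe is structurally one-sided. All records below are invented for illustration.

```python
# Stylized illustration: feedback data for a recruitment algorithm.
# All records are invented for illustration.
candidates = [
    {"id": 1, "model_score": 0.9, "hired": True,  "job_performance": "good"},
    {"id": 2, "model_score": 0.8, "hired": True,  "job_performance": "poor"},
    {"id": 3, "model_score": 0.4, "hired": False, "job_performance": None},
    {"id": 4, "model_score": 0.3, "hired": False, "job_performance": None},
]

# Only hired candidates ever generate an outcome. A rejected candidate who
# would have performed well (an unjust rejection) leaves no trace in the
# data, so this feedback loop alone can never detect or correct it.
observable = [c for c in candidates if c["job_performance"] is not None]
print(f"Feedback available for {len(observable)} of {len(candidates)} candidates")
```

This is one reason why feedback cannot come only from the system’s own data: the missing signal has to be sought from the people affected by the decisions.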

Finally, the commercial interests of developers or contractual agreements between them and user organizations may stand in the way of an effective feedback loop. To facilitate feedback while at the same time ensuring confidentiality, a limited number of persons within the organization could be authorized to monitor the factors relevant for the loop.

Effective feedback is crucial for the proper functioning of AI systems and the protection of civic values. The childcare allowances scandal is a tragic example of what can happen when there is not enough feedback and critical reflection on a system’s output. As the implications of using algorithms for citizens and their legal position increase, it is crucial that feedback about those implications be processed actively. That feedback loop will need to be twofold. First there is a loop between the developer and the user (a GP, a police officer or a teacher, for instance). Barriers all too often exist between these two actors. But a second loop is also needed, taking in everyone affected by the system (the GP’s patients, suspects arrested by the police, a teacher’s pupils and so on). Both users and those affected are in a position to recognize errors, contribute expertise and suggest improvements. So rather than a one-way monologue, a dialogue is needed.

The government must therefore pay more attention to the way these feedback loops are organized and their scope, particularly in the public sector – including local government, executive agencies and especially those domains where decisions have a major impact on citizens. In the WRR’s opinion, the development of a standard for feedback is a prerequisite here.

Concrete Actions for Recommendation 6

  • Identify developers, users and citizens affected by AI systems in different domains and develop effective feedback mechanisms.

  • Make feedback mandatory in government AI applications.

  • In areas involving sensitive information, organize feedback in an indirect manner.

5 Transition 4: From Reaction to Action

An effective approach to regulation requires a transition from reaction to action. By ‘reaction’ we here mean a primarily passive, wait-and-see attitude to legislation, with new laws only introduced in the face of acute, often specific issues. The risk here is that legislators both lose sight of the broader effects of AI on society and fail to consider its individual aspects as part of a bigger whole. Issues such as reliability, explainability and transparency are definitely important, but the decisive one is how to integrate AI into society. For the transition to an action-based approach, our recommendation for the short term is that the legislature assume a more active role, address relevant developments from a more integrated perspective and develop legislation relevant to an economic and social context in which AI is maturing. In addition to regulating the operation of the technology itself, lawmakers also need to focus on the other dynamics and economic forces associated with AI, such as the growing concentration of power in the hands of a limited number of (mostly private) parties and the consequences this has for AI’s place in society. This transition thus requires that government play a more directive role in organizing the ‘digital living environment’. So our first recommendation for the transition from reaction to action is as follows.

Recommendation 7

Link the regulation of AI to a discussion about the organization of the digital living environment and set a broad legislative agenda.

As we have seen in Chap. 8, various regulatory processes have been set in motion in recent years. These include both national and international initiatives, from European legislation on AI and data use and discussions around facial recognition and autonomous weapons to the regulations and guidelines drawn up by Dutch ministries. The European proposal for an AI Act, to which the Netherlands will eventually be bound, amounts to a concrete proposal for regulating AI systems on the basis of various risk categories. These regulatory processes mostly concern acute and relatively clearly defined issues such as the use of algorithms to combat fraud, bias and discrimination, as well as issues surrounding transparency and unreliable outcomes. Here the debate on regulating AI focuses mainly on the relevance of existing frameworks and whether new regulatory and supervisory institutions will be needed. One question that is not addressed sufficiently is what civic values we want to actively protect or develop, and what steps this will require as we integrate AI into society.

As AI becomes more embedded in our society, second and third-order issues will arise that require new rules for their management. A system technology always gives rise to questions about the effects of its concrete application, and even more so about the associated economic dynamics and their wider effects on society. Electricity and the advent of the motor car, too, forced legislators to consider developments from the perspective of their broader effects on society, in this case the physical environment. Power cables had to be laid above or below ground, and a road network constructed that took account of the natural environment. Embedding AI will involve similar choices concerning the design of the digital living environment, a phenomenon that already encompasses many aspects of society. The WRR believes strongly that government regulation of AI should involve more than just the technology itself and its applications (reliability, safety, transparency and so on) and should also encompass the wider digital living environment.

The development of earlier system technologies teaches us that the role of government will grow as AI becomes more integrated into society. It would be prudent to take on this greater role sooner rather than later. The European proposal for an AI Act regulates the authorization of AI applications in the member states but, because it focuses on managing risks, leaves many matters unaddressed. The WRR recognizes that the potentially ubiquitous nature of AI will make it difficult to anticipate which frameworks will be threatened or otherwise require modification. In many cases this will only become clear over the course of time. But the legislature cannot afford to sit back and wait – the public and other interests at stake are too great. Lawmakers need to stay abreast of the latest developments to be able to respond in good time. To this end the government must not only invest in research and in monitoring those developments (as official regulatory bodies currently do), it should also dare to take concrete steps. The new and therefore somewhat uncertain nature of AI should not be overestimated. Ambiguities and tensions in the existing legal frameworks that are already obvious can be rectified or eliminated fairly easily, which will benefit the ongoing process of embedding AI in society. Uncertainty about the applicability of the existing frameworks, after all, as well as points of legal contention such as what data may be used, currently pose obstacles to the technology’s broader application. It is better to make these choices now, because otherwise we as a society could be faced with a fait accompli.

In the WRR’s opinion, the most urgent task for government is to take the initiative and plan for the long-term development of AI and the management of its broader social effects. This involves issues such as the goals we want to pursue as a society and the question of where, for what purpose and under what conditions we want to use AI – including restrictions or even bans in certain domains (as also proposed in the European proposal for an AI Act). The opportunities society can derive from AI deserve particular attention here. Like electricity, AI is not only an economic good but can also benefit large groups in society or even the entire population. The introduction of electricity made the days longer, homes safer, cities cleaner and life more enjoyable in many ways. Similar advantages may be expected from AI. The challenge for government is to ensure that the technology is deployed where it can contribute the most, on a scale and for purposes congruent with the needs of Dutch society.

This discussion requires thorough consideration of sometimes conflicting civic values, a task that cannot and must not be left exclusively to technical experts and tech firms. Perhaps even more important for government than asking whether the existing frameworks are adequate for the challenges ahead or whether AI in fact requires new rules is the task of forming a clear picture of the organizational issues involved in embedding this system technology in our society, including the role to be played by official regulation.

The WRR believes that a more strategic approach to AI should echo the approach used until recently for national land-use planning in the Netherlands, which was based on comprehensive long-term policy papers. In the case of AI, the government can also turn to its 1998 policy document on “legislation for the information superhighway” (Nota Wetgeving voor de elektronische snelweg), which set out a strategic vision for the internet by formulating a series of policy challenges and goals, a corresponding governance philosophy, a toolkit and an implementation plan (see Box 10.6).

Box 10.6: Legislating for the Information Superhighway

The 1998 policy paper on “legislation for the information superhighway” (Nota Wetgeving voor de elektronische snelweg) presented the then Dutch government’s perspective on regulation of the internet. It was based on an extensive study of the internet’s impact on the Dutch legislative environment. As well as exploratory technical, governance and legal surveys and a comparative international legal review of the internet, this also included a discussion of strategic themes such as internationalization and jurisdiction, reliability, markets and law enforcement.

The policy paper provided a framework of reference to give all actors in the process a better understanding of pertinent questions related to internet legislation, contained a series of proposals for new and amended statutes (as well as measures to repeal) and suggested possible Dutch input for international forums. To guide the implementation of these proposals, it also presented a prioritized plan of action.

Concrete Actions for Recommendation 7

  • Accept that preparing legislation aimed at integrating AI into society will be a long and sometimes uncertain process. Adapt legislative instruments accordingly, but do not wait too long before acting.

  • Draw up a broad and integrated legislative agenda for AI and the organization of the digital living environment, including specified policy goals, a corresponding governance philosophy, a toolkit and an implementation plan.

  • Include in this agenda a list of legal provisions to explicitly regulate the implications of AI in the short term (covering, for example, automated decision-making, liability, archiving and the legal status of autonomous systems).

  • Strengthen the monitoring role of relevant official regulatory bodies and create a feedback loop with policy and legislation. If necessary, process the results – along with those generated by other actors – in a separate monitor.

Our second recommendation for the transition from reaction to action concerns government’s specific focus on regulating AI as a systemic phenomenon.

Recommendation 8

Use legislation to actively steer developments related to surveillance and data collection, the skewed relationship between public and private interests in the digital domain and concentration of power.

Treating the regulation of AI as a systemic issue – and hence an issue of AI’s integration into society – reveals how the digital living environment needs to be organized accordingly. If government does not actively manage how and by whom AI is used in society, there is a risk that it will eventually be unable to control its development. This requires action in at least three areas.

First, it is important to reduce the public sector’s dependence on private companies. While AI is finding increasing use in the private sector, government is less eager to adopt it due to unfamiliarity with the technology and growing concerns about its use. Some examples illustrate this gap. The police are required to adhere to strict rules when enforcing the law, but what if services or applications become available that allow individual citizens to use facial recognition software to identify criminals? Or take public space, where government is primarily responsible for overseeing the behaviour of individuals and businesses. With the proliferation of other parties collecting information by means of cameras, drones and sensors, those parties potentially have access to more information about public space than the authorities themselves. As a result, government could lose some control of areas that fall under its responsibility, an issue that could be compounded by a brain drain of the requisite policymaking knowledge as more third parties use AI. In addition, this could lead to sensitive matters being outsourced – with the consequence that dubious practices are hidden from government view.

Secondly, the growth of mass surveillance, and with it the largely unfocused collection, use and reuse of data, needs to be brought to a halt. Here too, of course, the relationship between the social costs of surveillance and data use and their benefits could be examined on a case-by-case basis (as currently), and various safeguards could be put in place for individual applications – varying from facial recognition and influencing online behaviour to smart applications in homes.Footnote 10 But there is also a more structural component to this development: tracking people – including their behaviour and even emotions or unique DNA characteristics – has become an important part of the business model of numerous companies, including online platforms.Footnote 11 The internet economy is increasingly underpinned by various forms of surveillance. AI can be seen as the next phase in this development, since it enables companies to track individuals, attach profiles to them and respond to their preferences. Also relevant is the rapid proliferation of digital devices that facilitate tracking, which is why major technology companies are entering the market for smart consumer electronics or forming alliances with the manufacturing industry. Surveillance activities – including those by government itself – have a major impact on the use and perceptions of AI and raise questions about how companies utilize data and how the relationship between governments and citizens is affected. AI can never acquire a legitimate place in society if we cannot find a better way to protect civic values such as privacy, individual autonomy, security and democratic control.

Finally, another issue for the further development of AI is the far-reaching concentration of power within a limited number of technology companies – in particular, a small group of American ‘tech giants’ including Google, Facebook, Amazon, Microsoft and Apple, all of which also happen to be among the biggest players in AI. Their power has only increased as a result of the COVID-19 pandemic and the growth of working from home and video conferencing. For example, a very small number of providers completely dominate the supply of certain crucial components to the Dutch higher education sector.Footnote 12 There is increasing worldwide resistance to the power these companies wield from their bases in Silicon Valley, and governments are now starting to act. The European Commission, the US Department of Justice and the UK government are amongst those to have described these firms as a threat to innovation, competition and privacy. In addition, the way they filter and disseminate information is increasingly seen as a serious political threat, not just to vulnerable democratic governments but even to established democracies such as the Netherlands.

The major technology companies have the capacity and resources to determine the direction in which AI is developed and used. Moreover, network effects allow them to play an important role in other sectors too – activities driven not by democratic values but solely by commercial interests. Their position and power are particularly problematic when the services they provide become part of the social infrastructure. How the power of the big technology companies will be restricted remains unclear, but the history of system technologies teaches us that monopolies are typically either broken up or forced to open up their infrastructures to others. Various proposals to this end are currently in circulation.Footnote 13 The most concrete to date have been tabled by the European Commission, which has drawn the contours of a coherent European internet law with its draft Digital Markets Act and Digital Services Act.Footnote 14 The WRR advises the Dutch government to contribute actively to these proposals and to provide input where necessary. The Netherlands can also take its own independent steps in this regard by adapting competition legislation and the regulation of data power. More effective use of public procurement policy (already mentioned as one of the concrete actions arising out of recommendation 3) could also be a means to encourage a greater diversity of suppliers of products and services.

In addition to these proposals aimed at limiting the power of the industry and ensuring a well-functioning market, there are also initiatives aimed at reducing dependence on private suppliers and developing alternatives with public funds. One example is the EU initiative described in Chaps. 5 and 9 to develop its own cloud services and AI centres (including in the Netherlands), as well as the AI4EU platform. There are also more far-reaching projects on a smaller scale, such as the establishment of digital utilities for electronic identification. A utility of this kind could also be considered for AI – for example, as part of the national AI identity mentioned earlier and its supporting technical infrastructure. An important facet of such initiatives is that their development can be rooted in civic values. This is particularly relevant for public sectors such as healthcare and education.

The WRR’s primary concern regarding the transition from reaction to action is that government must realize that regulating AI alone will not be enough; it also needs to act in many other areas to ensure that the use of AI at least upholds, and preferably reinforces, a whole raft of civic values. If it remains insufficiently aware of this and fails to take up its broader task in good time, there is a risk that other interests and parties will take the lead in embedding AI in our society. It is unrealistic to think that that path can still be changed after the ‘moment of closure’ discussed in Chap. 8.

Concrete Actions for Recommendation 8

  • Guarantee and secure government control over core digital facilities – building them in-house if necessary – in domains critical for the Dutch AI identity and in public sectors including healthcare and education.

  • Review legislative policy on surveillance in light of the fact that AI is the next stage in the development of surveillance technology.

  • Deploy available procurement instruments on a much larger scale to safeguard civic values. Ensure that such instruments do not favour the major technology companies.

  • Actively contribute to European legislation and related initiatives for the regulation of AI and the wider digital environment.

  • Accelerate the process of amending competition law, in particular where it affects the data economy and AI companies.

6 Transition 5: From Nation to Network

Finally, the task of positioning requires a transition from ‘nation’ to ‘network’. What this amounts to is that we must not consider AI merely as a zero-sum competition with other countries but also need to work on building stronger ties with partner nations. This applies in particular to the member states of the EU. The transition here also involves considering national security not just as a response to external threats, since it also encompasses the technologies citizens use in their daily lives. To fully understand the security threats we face, we need to shift our attention to the international network we form part of. The WRR proposes that rather than clinging to the idea that the Netherlands is in competition with other countries to build prosperity and power (as a nation), we should focus more on ties with other countries (as part of a network). As regards the economic component of this task, our recommendation is as follows.

Recommendation 9

Strengthen the competitiveness of the Netherlands through ‘AI diplomacy’ that focuses on international co-operation, in particular within the EU.

Governments and businesses worldwide are investing heavily in AI to strengthen their competitiveness. There is far-reaching international competition in all aspects of AI, not only in the form of large-scale public and private investment but also in the development and retention of talent. The Netherlands cannot afford to fall behind here, because many neighbouring countries are already making substantial investments in these activities.

However, the WRR does advise that, rather than simply ‘doing enough to stay in the race’, the Netherlands adopt a somewhat different role and position. More attention should be paid to strengthening competitiveness through international co-operation, by conducting ‘AI diplomacy’ instead of focusing on competition.

A first focal area here could be fundamental research. The European CLAIRE network has chosen to establish its head office in The Hague.Footnote 15 Strengthening partnerships like this could generate positive spin-offs for Dutch business. A good analogy is CERN in Switzerland, where Europe has become a leader in particle physics by pooling its research resources. It is worthwhile taking note of the conditions under which such research collaborations achieve success.Footnote 16

Countries can also co-operate in the development of concrete AI applications. For example, France and Germany have initiated a European data and cloud service called Gaia-X.Footnote 17 The Netherlands joined later, and there is now also a Dutch hub representing our national interests at the European level. Critics may warn that such projects are unfeasible, but in fact Europe has a history of successful technological partnerships including Galileo (Europe’s alternative to GPS) and the aircraft company Airbus. Here again, it would be wise to learn from past successes and failures.Footnote 18 Such partnerships clearly have the potential to strengthen the European position. Failure to participate would represent a lost opportunity to uphold Dutch interests at this level.

Collaboration to strengthen competitiveness could also take the form of more co-ordination between existing companies. The growing interdependence of economic and geopolitical objectives has led to trade disputes involving various digital technologies. Dutch firms including ASML and NXP, which supply important hardware for AI applications, already find themselves subject to the vagaries of US-Chinese trade relations. Similar situations may arise in the future and affect Dutch technology companies like Philips, KPN, TomTom or Adyen, and other European businesses such as Siemens, SAP, Ericsson, Nokia or Dassault. In the light of this contest between the global superpowers, European countries would do well to work together to strengthen their joint international position and so also improve their competitiveness as individual nations – and that of their own companies. Specifically, we should consider policies to protect key businesses from takeover bids (hostile or friendly) and unwarranted fines or sanctions imposed by trading partners.

Another way in which co-operation can strengthen Dutch competitiveness is through legislation and regulation. The EU is already active here when it comes to personal data (the GDPR) and the draft AI Act of April 2021.Footnote 19 In addition, the process of standardization is crucial. This technical domain has so far received relatively little attention in the AI debate, but is absolutely instrumental in strengthening countries’ competitiveness.Footnote 20 Furthermore, as we have explained in Chap. 9, standardization is increasingly subject to geopolitical forces. China in particular is trying to have its own standards for AI accepted as the norm in international forums. The EU (including the Netherlands) needs to be very alert to this development and seek co-operation with other countries that subscribe to the same values.

While the EU is the appropriate forum for most areas of co-operation, in specific cases like-minded and pioneering third nations such as Canada, France, South Korea or Singapore could be approached as well. When it comes to issues of digitalization, we must be open to broad coalitions involving many countries.Footnote 21

Concrete Actions for Recommendation 9

  • Identify suitable domains and forums for co-operation in AI.

  • Explore opportunities to strengthen the Netherlands’ position in each of these domains and forums as part of the Dutch ‘AI identity’.

  • Involve national and international actors such as standardization bodies and prominent academics in the policymaking process.

  • Formulate specific goals for each domain, but also identify synergies across them – for example, between fundamental research and European projects for AI applications.

  • Be alert to regulatory proposals submitted by other countries that could harm Dutch interests (AI diplomacy).

In short, the Netherlands can strengthen its competitiveness through international co-operation in fundamental research, the establishment of new services and the co-ordination of industry legislation and regulation. The WRR therefore recommends developing an integrated AI diplomacy strategy to facilitate well-considered choices in these domains (including choices for the long term).

The transition from nation to network has a security dimension as well as an economic one. Our recommendation in this respect is as follows.

Recommendation 10

Develop the knowledge required to safeguard the defence of the Netherlands in the AI age. To this end strengthen the nation’s capacity to defend itself in the ‘information war’ and against the export of ‘digital dictatorship’.

The issue of AI’s impact on security often focuses on autonomous weapons. These systems can indeed have far-reaching consequences for security, and so the current efforts to control their use are certainly welcome. But AI influences the military domain in other ways as well, such as improving decision-making processes or enabling the analysis of more data. More and more attention is now being paid to these aspects, not least within NATO. The WRR wishes to emphasize the importance of a broader perspective here. AI affects security not only in the military sense but also in civil society.

The far-reaching digitalization of society and the economy is making our country more vulnerable to non-military attack. Social media platforms, sensors in the infrastructure, operating systems, communication systems and various other ‘networked’ domains are all potential targets. Cybersecurity is a fast-growing policy domain. In a recent report the WRR argued that more urgent preparations are needed for the phenomenon of ‘digital disruption’. In addition to the infrastructure and networks themselves, greater attention should be paid to the information that flows through them.Footnote 22 Its influence and manipulation fall under what is termed ‘information warfare’. In part this is being fought manually, but increasingly also by means of algorithms.

The WRR points out the need for an integrated approach to this risk. It was long assumed that digital technologies have an inherently democratizing effect. Although they can certainly help foster democracy, various authoritarian regimes have also proven very capable of using them for undemocratic ends. They deploy digitalization, and AI in particular, to strengthen their regimes – for example, by enabling widespread, centralized and cheap surveillance. Moreover, countries such as China and Russia are increasingly exporting such technologies and so encouraging other states to move further down the road of authoritarianism. But the risks could ultimately affect the Netherlands as well. By using digitalization and AI as instruments of national security, such authoritarian countries have built up strong digital capabilities. The WRR believes that the Netherlands needs to be more aware of this. Moreover, the discussion should go beyond the rollout of 5G and the dangers of doing business with companies like Huawei. There are plenty of other risks too, such as the import of technology like cameras with facial recognition, smart city technology for monitoring public spaces and new telecom hardware and software for public services. Another is the export of Dutch technologies to countries with authoritarian goals. Finally, campaigns to spread fake news, deepfakes and conspiracy theories in our country are also threats (see Box 10.7).

Box 10.7: AI as a Weapon of Information Warfare

‘Microtargeting’, ‘sentiment analysis’ and ‘natural language processing’ are all examples of techniques that are increasingly being used in ways that threaten our national security. Deepfakes (faked video and audio recordings that are ever harder to distinguish from the real thing) are becoming more and more common. These activities entail risks for individual citizens and for society as a whole, because they encourage distrust, uncertainty and chaos. Such technologies could ultimately even pose a risk to democracy itself.

Several initiatives within the EU are addressing growing concerns about ‘digital sovereignty’. In early 2021 the Dutch Cyber Security Council – an advisory body comprising representatives of the business community, the government and cybersecurity experts – explicitly called for a far more active stance by the national government to maintain its control over democracy, the rule of law and the economic innovation system.Footnote 23 The WRR agrees with the council’s recommendation and advocates that the Netherlands work towards the development of a joint European strategy in this field. The Dutch initiative to collaborate with France and Germany on the establishment of an EU-wide regulatory body and gatekeeper empowered to monitor all mergers and takeovers by major digital platforms is a good first step in this direction.Footnote 24

It is also important for the Netherlands itself to gain a better understanding of how foreign powers deploy information for their own purposes and how this can threaten our democratic system. We then need to strengthen our national capabilities – including in AI – to counteract that threat. It is not obvious how the information war can be won. What is clear, though, is that we have no time to lose: we must build the requisite expertise and make the necessary policy choices as soon as possible. A good first step in the short term is to focus more on threats of this kind in the annual Cyber Security Assessment compiled by the National Co-ordinator for Terrorism and Security.

Concrete Actions for Recommendation 10

  • Identify how different forms of AI, such as microtargeting and deepfakes, are being deployed in the global information war.

  • Prevent the import of technologies of digital dictatorship to the Netherlands and the export of Dutch technologies to countries where they will be used for dictatorial purposes.

  • Further strengthen the digital sovereignty of the Netherlands as part of EU-wide efforts to this end.

  • Systematically include information security risks in the annual Cyber Security Assessment.

7 From Instruments to a Policy Infrastructure

The above recommendations concern the work that needs to be done to embed AI in society. Our final recommendation is about the way this work can be supported and focuses on the institutional aspects of government policy on AI.

As mentioned earlier, the history of system technologies teaches us that the role of government in AI will gradually increase in various ways. Railways were originally developed by private companies in the United Kingdom and the United States, but over time government took a more active role: first through regulatory legislation, and eventually, in many European countries, by becoming a public transport operator itself. The same occurred with electricity, where governments built the networks. So, while the nature of government’s role varies, the extent of its involvement clearly increases. Each time this has happened, a policy infrastructure emerged to co-ordinate the new tasks and discharge the corresponding responsibilities. In the Netherlands, for instance, national public works agency Rijkswaterstaat was entrusted with managing the country’s motorways and various new public bodies were created to oversee road use: the Netherlands Vehicle Authority to issue car registrations and driving licences, the Human Environment and Transport Inspectorate for the safety of taxis, buses and other forms of transport, the Central Office for Motor Vehicle Driver Testing and so on.

We expect a similar pattern to emerge for AI. This means that integrating it into society will require government to do more than only develop new instruments. In the coming years it will also have to build a policy infrastructure. For this transition, our final recommendation is as follows.

Final Recommendation

Build a policy infrastructure for AI, starting with a co-ordination centre that is anchored politically in a ministerial subcommittee.

The need for a policy infrastructure is becoming ever clearer. Like previous system technologies, AI will influence a variety of both sector-specific and generic civic values. In time both the risks and opportunities for those values will come into sharper focus. AI will also increasingly necessitate a debate about the goals we want to pursue as a society and the question of where, for what purpose and under what conditions we want to use this technology. Furthermore, it will require international co-operation, particularly within the EU. So the government will become increasingly involved in its development. In addition, the WRR notes ever greater recognition of AI’s strategic importance – a factor that also calls for an active government role. These developments reveal the need for wide-ranging and generally available resources to support the underlying process of policymaking and legislation.

The discussion on a policy infrastructure for AI is in fact already underway in the Netherlands. For example, a Ministry for DigitalizationFootnote 25 has been proposed that would include AI. There are also calls to establish a supervisory body for algorithms. Various countries have already passed the stage of conceptualization and have launched concrete initiatives to embed AI institutionally (see Box 10.8). The WRR advises the Netherlands to follow suit.

Box 10.8: Countries Already Embedding AI Institutionally

Several countries, amongst them Belgium, the UK, France and Germany, have set up committees bringing together experts from academia, the industry and government to develop their national AI strategies. The argument posited for this broad composition is that AI will eventually affect all sectors of society, and hence also all ministerial remits.

Whereas these bodies were often initially temporary and external to government, some are now permanent organizations in the form of advisory committees (in Austria and Singapore, for instance), government task forces (Kenya, India and others) or initiatives entrusted with responsibility for AI (such as the National Robotics Initiative in the US, which is supported by various government organizations). The United Arab Emirates is the only country to have a Ministry for AI; the country wants to be at the global forefront of AI in sectors like transport, healthcare and renewable energy, and even to build houses on Mars before 2117.

The UK has set up an Office for Artificial Intelligence to implement its own AI mission and the associated ‘Data Grand Challenge’. This body falls under the Department for Digital, Culture, Media and Sport and the Department for Business, Energy and Industrial Strategy. Important achievements so far include its Guidelines for AI Procurement and the Guidance on Building and Using Artificial Intelligence in the Public Sector. The Office for Artificial Intelligence and the UK government in general are assisted in the development of AI policy by an independent Council for AI, whose members include AI experts and representatives from the industry, the public sector and academia. This body is also working to broaden public knowledge of AI.

The governments of various countries are taking steps to develop a policy infrastructure for AI, but there is no one blueprint for this. Some differences have to do with the missions of the relevant bodies, their composition and competence, and above all how they are anchored in the government organization. Prior to the advent of AI, some countries had already established an agency (Denmark) or appointed a minister (Norway, Sweden, Germany, Italy) or undersecretary (France, Belgium) for digitalization or digital government – a responsibility that now also includes AI. In any case, it is clear that the Netherlands should look to other countries for inspiration in creating a Dutch AI policy infrastructure (see Box 10.9).

But there is another development that puts the need for a policy infrastructure on the agenda: the EU’s draft AI Act requires member states to designate one or more national competent authorities to supervise the application and implementation of AI, and to designate a single national supervisory authority as the official contact point for the public and other actors. This authority will also represent the relevant member state on the European Artificial Intelligence Board, the body that will implement the law.

In short, come what may the Netherlands is going to have to develop a policy infrastructure to meet the EU requirements. As for the next step, the WRR considers it premature at this stage to advocate a separate ministry or a specific regulatory body for AI. Both options may prove valuable at a later stage, but at present it remains insufficiently clear what their added value might be, especially as there is a real possibility of overlap with existing actors. It also takes a lot of time, resources and energy to establish such complex official bodies. Moreover, centralization can create unrealistic expectations with regard to designated tasks and responsibilities. For a central ‘algorithm authority’ to be able to decide what is and is not permissible, for instance, it would require a thorough knowledge of rules, practices and standards in numerous fields, ranging from healthcare to mobility and defence – a near-impossible challenge. Given that AI is still in its early stages as a system technology, it remains unclear which issues will demand a more general, overarching approach from government.

Box 10.9: Starting Points for a Dutch AI Policy Infrastructure

The Netherlands already has various forums that can be considered part of an AI policy infrastructure. For example, several ministries now have departments, directorates or separate units for digitalization, such as the Digital Government Directorate and the Digital Economy Directorate.

The Ministry of Economic Affairs and Climate Policy, the Ministry of the Interior and Kingdom Relations and the Ministry of Justice and Security recently formed a partnership for digitalization. These three departments were jointly responsible for the first Dutch Digitalization Strategy in 2018 and the updated versions of 2020 and 2021. Since the new national government took office in 2022, there is now also an undersecretary for Digitalization at the Ministry of the Interior.

Meanwhile, an interdepartmental working group has been active in developing the government’s perspective on the impact of digitalization on civic values and human rights, as well as on the SAPAI (Strategic Action Plan for Artificial Intelligence). Another such group, specifically for AI, has been formed to bring together government inspectorates and market regulators.

The Netherlands also has a recently overhauled committee of chief information officers, focusing amongst other things on ‘digital transformation and technology-driven innovation’ throughout government. This committee has developed a generic action plan for information management and installed a government commissioner for that domain. Finally, a Permanent Committee for Digital Affairs was established in the House of Representatives in 2021.

However, none of this means that the WRR is content with the status quo in policy surrounding AI. Many actors within government are currently faced with AI-related issues but have limited knowledge of how to deal with them. Although some do co-operate in their search for answers, there is no structurally co-ordinated approach.

In recent years a number of audits have been conducted of AI applications currently used in and outside government (both central and at other levels), and several exploratory and advisory studies have considered their relationships with civic values. Many of these exercises, however, only highlight the fact that we are dealing here with a fragmented landscape of participating and responsible bodies. A system technology such as AI requires that the process of knowledge development be permanent and clearly structured, and that the information generated be widely shared and discussed. This is necessary to gain a proper insight into the way in which AI is being integrated into society and what infrastructural issues this raises for government.

The WRR therefore believes that the next step towards a policy infrastructure should be a co-ordination centre for AI, which should discharge a number of functions.

Possible Functions of an AI Co-ordination Centre

  • Platform. The centre facilitates co-operation between government organizations at the policy, implementation and evaluation levels. It also serves as a contact point for international organizations, with a focus on the EU and the European Artificial Intelligence Board.

  • Knowledge. The centre identifies AI initiatives and trajectories already under way within and outside government. This could take the form of an annual monitor of the state of AI in the Netherlands (analogous with the Dutch ‘Monitor of Well-Being’), with the results used to set training priorities, identify bottlenecks and so on, and also reviewed annually by parliament (as the Monitor of Well-Being is).

  • Facilitation. The centre plays a prominent role in facilitating our other recommendations for AI’s integration in society. For example, it could work on the development of an AI licence for government employees and collect ‘better intelligence’ on regulatory issues.

  • Positioning. The centre is an independent body, but in order to stimulate knowledge sharing, co-operation and a coherent policy it falls under one or more national ministries. At the same time, it is important that the centre be fed with knowledge from outside: from academia, industry and so on. To this end an external AI council of prominent experts could be established, which would meet periodically to inform and advise the centre and government in general.

With the further elaboration of these functions, the proposed co-ordination centre could provide policy directorates, supervisory bodies and executive agencies with a structure through which they can interact on a regular basis and on a variety of issues. Because different domains – healthcare, education, agriculture and so on – all have similar questions, they can benefit from learning from each other’s solutions. A co-ordination centre could also help focus on those issues, opportunities and risks of AI most relevant for government. Although the centre need not necessarily focus on overall binding policy – its task initially will simply be to bring together what is happening in AI within government – it can play an important co-ordinating and facilitating role in establishing the broader legislative agenda advocated by the WRR in recommendation 7. The experiences gained can then form a basis to facilitate policy preparation, and perhaps also policy formulation and implementation, in the next phase.

Although the proposed centre will not itself have policymaking authority (at least not initially), it will play a crucial role in this area. Its findings will need to be acted upon, and it will be close to the political and public administration arenas. It is therefore important that the centre have political ‘anchorage’ so that policy can be made quickly if necessary, and that political agreement and backing be available to this end. The Cyber Security Council has previously advocated the creation of a ministerial subcommittee for cyberresilience.Footnote 26 In line with this proposal, the WRR also advises that the government establish such a subcommittee to discuss substantive issues of digitalization that require integrated co-ordination. These can include cyberresilience issues, and certainly also AI. The fact that digitalization has become such a politically sensitive issue is another good reason to set up a ministerial subcommittee (Fig. 10.3).Footnote 27

Fig. 10.3
An illustration of the recommendations of societal embedding is demystification, contextualization, positioning, regulation, and engagement. The text below reads, establish a policymaking infrastructure for A I, including an A I coordination center.

Recommendations by task for AI’s integration into society

8 In Conclusion – The Internal Combustion Engine of the Twenty-First Century

Today the motor car is considered an integral part of our daily lives. It is thus hard to imagine what a revolutionary idea it once was. Let us try to imagine what the situation was some 100 years ago. The internal combustion engine had already been around for a while in 1921, but it was only a few years earlier that Henry Ford had proven his ability to mass produce cars. People did not understand what they were dealing with and called them ‘horseless carriages’. There was also scepticism about the usefulness of motor vehicles, which was not surprising given the many defects they had. Horses continued to be more suitable for many purposes. Moreover, there was no reliable road network to allow the car to function at its best.

In time, however, the car would change the face of town and countryside, and our whole way of life. A ‘battle for the streets’ ensued, in which cyclists, pedestrians and those who could not afford a car would eventually be barred from parts of the road network. But the development also contributed towards a new sense of freedom and individuality. Thanks to these changes, the car transformed the way society was organized – and that called for new rules, new measures and new institutions. In addition, the car demanded a new perspective on wider issues concerning the design of the public infrastructure. Both the individual measures and this broader perspective were also required to address all kinds of second-order effects, such as pollution and the risk of accidents. Automotive companies became symbols of progress and national pride in various countries. During the Second World War the internal combustion engine made its mark on warfare in all kinds of vehicles.

These developments were impossible to foresee in 1921. In retrospect there is no simple answer to the question of how the motor car changed society, and whether that was a good or a bad thing. What is certain is that embedding the automobile in society was, and still is, a painstaking and lengthy process.

One hundred years from today we will take AI for granted just as we now take the car for granted. We cannot yet imagine what kind of world that will be, but once we are there it will be just as difficult to look back a century and imagine how AI began in the lab and then took decades to spread throughout society. We are now on the eve of that process. With the tasks we have identified in this report and the accompanying recommendations for government, the WRR hopes to help smooth the exciting path ahead.

Recommendations

  • Demystification

    1. Make learning about AI and its potential applications an explicit goal of government’s public function.

    2. Stimulate the development of ‘AI literacy’ amongst the general public, beginning with the establishment of algorithm registers.

  • Contextualization

    3. Explicitly choose to develop a national AI identity, then investigate what adjustments this requires to the technical ecosystem in relevant domains.

    4. Strengthen the skills and critical capabilities of individuals working with AI systems by developing a suitable training and certification framework.

  • Engagement

    5. Strengthen the capacity of civil society organizations to expand their work into the digital domain in general and AI in particular.

    6. Make sure that effective feedback loops exist between AI’s developers, its users and the stakeholders who experience it in practice.

  • Regulation

    7. Link the regulation of AI to a discussion about the organization of the digital living environment and set a broad legislative agenda.

    8. Use legislation to actively steer developments related to surveillance and data collection, the skewed relationship between public and private interests in the digital domain and concentration of power.

  • Positioning

    9. Strengthen the competitiveness of the Netherlands through ‘AI diplomacy’ that focuses on international co-operation, in particular within the EU.

    10. Develop the knowledge required to safeguard the defence of the Netherlands in the AI age. To this end strengthen the nation’s capacity to defend itself in the ‘information war’ and against the export of ‘digital dictatorship’.

  • Final recommendation

    Build a policy infrastructure for AI, starting with a co-ordination centre that is anchored politically in a ministerial subcommittee.