Embedding or integrating AI into society depends on the existence of frameworks, and therefore on regulation. Now that the technology is making the transition from the lab to society, its effects on the economy and on society are subject to widespread scrutiny. This has led to debate about the nature of the regulatory measures needed to ensure that AI is properly embedded in society and in government processes.Footnote 1

Attention has focused not only on the opportunities but particularly on AI’s potential negative consequences. Hundreds of guidelines, codes of conduct, private standards, public-private partnership models and certification schemes have been developed with a view to both promoting opportunities and addressing adverse repercussions.Footnote 2 One of the more important initiatives is the European Commission’s AI Act.Footnote 3 Moreover, many existing legal provisions and frameworks are potentially applicable to AI, ranging from fundamental rights to liability law, intellectual property rights and the rules on archiving and evidence. In other words, the effects of AI are now controlled by means of a wide range of frameworks and specific rules, many more of which are likely to be laid down in the years ahead.

Formulating desirable and necessary regulations involves not only deciding on the actual content of the norms to be applied, but also determining the specific regulatory instrument through which those norms will apply (legislation or private arrangements, such as codes of conduct) and the level at which the rules are laid down (international, national, local). In short, the overarching task of regulation relates to the question ‘what frameworks are required?’ Because AI is a system technology, that general question has a number of more specific aspects, pertaining not only to such matters as the applicability of existing rules and the need for new ones, but also to matters of scope and regulatory level. In this chapter we are concerned specifically with regulation arising from the role and position of government (national and international), and particularly the legislature. We also explore the extent to which, in regulating AI, government can and should rely on the engagement of other actors, such as the technology companies that introduce AI applications into society.

Specific questions pertaining to the regulation of AI can be divided into two groups, which are considered separately in this chapter. The first concerns the relationship between regulation and the scope for innovation. Because AI is a system technology, government will need to establish what is required on many fronts. After all, many of the implications of AI’s introduction into society remain uncertain and unclear. Government decision-making regarding regulation and its effects must reflect that. This group thus includes the following questions. Do the existing legal rules provide sufficient legal certainty and legal protection? Do those legal rules sufficiently facilitate innovation? And what should be done about the fact that legislation almost always lags behind technological development?

If new rules are deemed necessary, the question of what should be done at the national level and what at the international level then comes into play, as does the question of what can be left to the market and what government should deal with. Although a decision has now clearly been made at the European level to set up a legal framework specifically for the regulation of AI, there remain countless questions – some of them quite fundamental – still not addressed by the proposed European AI regime. For example, it fails to address the implications of algorithmic decision-making for citizens’ legal position, even though such decision-making may restrict or even transform constitutional principles such as the principle of legality.Footnote 4 The issue of what the amended EU Copyright Directive implies for access to the data used to train AI systems is also left unanswered,Footnote 5 along with countless other questions concerning the data used by AI. Other pertinent matters that fall outside the scope of the AI Act include competition and market failure issues in the field of digital services, implications for administrative law, the need to archive algorithms in order to comply with the Dutch Archive ActFootnote 6 and even, in the light of the European Charter of Fundamental Rights, whether the Dutch constitution is up to the challenge of AI.Footnote 7 In short, the European AI regime represents an important step forward but still leaves countless matters to the national legislature. In the first part of this chapter, we therefore consider various generic issues relevant to the means available for regulating AI, in particular legislation. We do not consider the substantive legal issues that exist in various domains, but instead concentrate on possible ways in which the legislature can address them in general terms.

From the history of earlier system technologies, it is clear that the process of their embedding is consistently accompanied by increasing government involvement. That serves as the starting point for the second part of this chapter, where we deal with such aspects as the influence of time on the regulatory frameworks and practical rules governing AI. As a system technology is embedded in a society, courts, regulators, NGOs and parliament all send out signals about the ability of that society or its public sector to manage the process without intervention by the legislature. One point of particular relevance here is society’s and government bodies’ ability to ensure that the applications of a system technology take proper account of public values. In practice, many such signals highlight the need for intervention by the legislature, judiciary or regulators.

Almost all system technologies require increasing government intervention over time, and therefore a more explicit role for legislation. For example, the use of steam engines led to high levels of hazardous air pollution in cities that did not decrease until industry was required to build taller chimneys.Footnote 8 Similarly, government’s role in the regulation of AI is likely to increase over time – and that makes it pertinent to ask how it can prepare. We argue that, at the very least, a broader perspective is needed: the current relatively narrow focus on the technology itself should make way for an outlook that takes in the process of societal embedding and its effects. As well as regulation concerned mainly with the development, characteristics and use of AI, there is a need for regulation that addresses the effects of its integration into society. We also show that as AI increasingly becomes part of our lives, there is a growing need to make fundamental decisions about the design of what we refer to as ‘the digital living environment’. The practical implication is that the regulation debate in the years ahead cannot be confined to matters of reliability, transparency and privacy but must also address broader issues concerning the organization of a society in which AI has a prominent place (see Fig. 8.1). Moreover, that debate must at least involve those actors with the ability to shape the digital living environment and the means they use to do so – particularly data.

Fig. 8.1 Various levels of regulation: (1) technology, (2) use, (3) regulating, (4) structural effects, (5) the digital environment

1 Government Standardization of AI

By proposing its AI Act, Europe is clearly signalling the need for specific rules to govern this technology. But there are many issues not covered by the act or that remain to be clarified before the proposal passes into law. So, it remains necessary to consider whether existing frameworks are applicable to AI. Will the Netherlands have to amend its General Administrative Law Act, for example, or the many regulations that apply to specific sectors such as care and mobility?

In the Netherlands and beyond, such issues have been the subject of considerable commentary and debate in recent years, resulting in numerous changes to the relevant frameworks.Footnote 9 These matters are outside the scope of this report, however. What we are concerned with are specific questions regarding the means to be used (type, level), particularly ones that stem from the systemic nature of AI.

First there are questions regarding the breadth of AI’s impact. As a system technology, it has the potential to become ubiquitous and to trigger complementary innovations in many areas. In other words, the technology can be utilized in numerous domains and for very wide-ranging purposes. The growing awareness of this fact is illustrated by the research under way into AI’s use in sectors such as healthcare, education and defence, as discussed in Chap. 3. Questions therefore arise regarding the legal and other requirements that the development and deployment of AI in those sectors must meet. Given the technology’s systemic nature, the debate on the required regulation should focus on whether rules should be tailored to the individual sector, the type of AI used or its practical application. In some cases, generic rules not necessarily specific to AI may be sufficient.

In that context, the distinction made by Lyria Bennett Moses in her research on regulation and technological change is useful. She distinguishes four categories of issue that a new technology may raise.Footnote 10 Firstly, it may be necessary to regulate new practices. For example, the General Data Protection Regulation (GDPR) requires that a human being must be involved when an autonomous system processes personal data in a way that has implications for the legal position of a data subject. Secondly, Bennett Moses distinguishes issues that make it necessary to clarify existing rules – because it is unclear whether they apply to the new technology, for instance. After all, many existing rules were not drafted with AI in mind and so may unfairly block the development of new, societally significant applications of the technology. At the EU level there has recently been debate concerning who is legally responsible for the actions of AI.Footnote 11 Can the AI system itself be held responsible?Footnote 12 That question forms part of a broader debate about the attribution of rights and obligations to AI systems.Footnote 13 Historically, the introduction of a system technology has often involved a phase when the applicability of existing rules needed to be clarified. In 1921, for example, the Dutch supreme court had to decide whether electricity could be stolen. After all, its immaterial nature meant that taking it could not be deemed ‘the removal of goods’.Footnote 14

Bennett Moses’ third category relates to issues necessitating regulation to prevent the uncontrolled introduction of high-risk applications. Historical examples include the risk of ‘death by wire’ (electrocution) associated with the increasing density and chaotic installation of power networks in urban centres towards the end of the nineteenth century. That issue was resolved by regulations making private utilities responsible for the safety of the electricity network. Similarly, in its proposed AI Act the EU is seeking to ban certain applications of AI, such as those that exploit vulnerable people and those that involve indiscriminate mass surveillance for law enforcement, social scoring (as with the Chinese government’s social credit system) or harmful manipulation. In the Netherlands, meanwhile, parliament has called for an end to the use of ‘discriminatory algorithms’.Footnote 15

The fourth and final category identified by Bennett Moses comprises issues arising where existing rules are based on assumptions that are invalidated by a new technology, making enforcement of the rules in line with those assumptions inappropriate. Earlier this century, for example, it became necessary to widen the protective scope of the law criminalizing the production of child pornography. Before modern digital editing techniques were available, the relevant legislation (Dutch Criminal Code, Article 240b) was designed to prevent the exploitation of children through child pornography. However, advances in digital image manipulation technology created a situation where pornography could be produced without subjecting the depicted children to actual abuse. The law was therefore amended in 2002 to prohibit the production of images that are harmful to children, even if their production does not involve actual abuse of the subject.

In the years ahead, government will have to assess whether the rules that apply within many domains of society, and the associated legal domains, are appropriate for the new entities, activities and relationships created by AI. The four categories outlined above can be helpful in this regard. If the conclusion is that new or amended regulations are needed, several further questions arise. First, should the regulations be specific or generic? Second, which is more appropriate: a technology-neutral approach or a technology-specific one? The third and fourth questions relate to the appropriate regulatory level, and to the actors involved and the means available to them. Below we briefly consider each of these matters in turn, particularly in light of our characterization of AI as a system technology.

1.1 Specific or Generic Policy?

The pervasiveness of AI can lead to a sense that it is best regulated using generic frameworks. This line of thinking is encountered in the debate around transparency and explainability and in that regarding the formation of new regulatory bodies. However, for the reasons outlined below we regard a generic approach as impractical in the long run.

In the debate regarding the regulation of AI, there is particular emphasis on transparency and explainability.Footnote 16 Not only do the workings of the technology, such as its decision rules, need to be explained in a way that people can understand, but it also has to be possible to clarify the choices underpinning the use of AI technologies and the actual decisions made by AI systems.Footnote 17 After all, clarity is a prerequisite for ascertaining whether or not citizens’ fundamental and legal rights are being compromised.Footnote 18 Transparency and explainability are also important in determining liability and responsibility for decisions taken by AI – especially if there is a need to understand how an AI system reasons and how particular decisions are reached. But disclosing how an algorithm works, and therefore its operator’s business model, may have competitive disadvantages. Transparency has the potential to distort competition and undermine intellectual property rights.Footnote 19

Moreover, what exactly do transparency and explainability entail? Both concepts are open to interpretation, and both may be pursued for a variety of reasons. The interpretation and objectives adopted have a major bearing on the type of information made available, and to whom. For example, a study undertaken in partnership with Statistics Netherlands has found that scientists tend to interpret explainability as meaning explainable to their peers, not the general public.Footnote 20 Sometimes the context in which AI is used implies that transparency and explainability are subject to limitations – when it is used for medical diagnosis and associated treatment, for instance. A strict interpretation of the informed consent requirement has implications for the level of detail required of the explanation given by a doctor regarding the basis of an AI system’s recommendations.Footnote 21 Finally, in relation to the question of whether generic or specific rules are preferable, we need to consider whether retrospective transparency is sufficient or whether prior transparency is also required. The importance of prior transparency varies from one application to another and depends partly on the seriousness of any potential consequences. Demanding it may constrain some applications of AI, such as neural networks. Christopher Reed therefore argues that prior transparency should be required only where AI poses a risk to fundamental rights or where society needs reassurance regarding the safety of its use.Footnote 22

So, although generic transparency and explainability rules may seem sufficient at first sight, specific regulations are often necessary in practice. The judgments of the Dutch Council of State, the nation’s highest court in administrative law matters, in the so-called ‘Aerius case’ (2017 and 2018) are illustrative in this respect.Footnote 23

Other practical examples demonstrate that numerous factors influence both the requirements for transparency and explainability and their scope. The opportunities presented by AI, and its risks, depend very much on the domains and the organizational context in which algorithms are used.Footnote 24 A fine-collection officer being erroneously prompted by an AI system to call a person who does not have payments outstanding bears no comparison to a traffic accident caused by an autonomous vehicle as a consequence of a system misinterpreting sensor data. Similarly, the moderation process of an online platform, where an algorithm gives advice but does not make decisions, cannot be compared with the algorithmic anonymization of court judgments. According to Stefan Kulk and Stijn van Deursen, such differences between domains, organizational contexts and the associated stakeholder interrelationships mean that it is preferable to tackle problems on a domain-specific basis wherever possible.

As indicated above, the choice between specific and generic regulation is also relevant to the debate regarding regulatory oversight of AI (as has been proposed in the Netherlands, the EU and the US), and the creation of a new overall AI regulator or authority.Footnote 25 In this context too, a generic approach – a general regulator with access to specific expertise – looks attractive. Nevertheless, there are various arguments against it. A regulatory authority needs a defined field of activity and a set of overarching principles as a basis for its oversight. As with other technologies in the early stages of their application, it is not currently possible to define such principles for AI because not enough is yet known about its risks.Footnote 26 Moreover, AI differs from non-system technologies when it comes to the feasibility of designing a supervisory regime suitable for monitoring all possible applications: that would need to have an extraordinarily wide scope yet still be capable of addressing an enormous variety of issues in detail. AI’s applications are highly diverse, and their implications are not always comparable. A regime suitable for autonomous vehicles would not be appropriate for smart refrigerators that order food based on consumption patterns. The challenge, therefore, lies not in oversight policy but in the risk that it remains overly generic and consequently requires the definition of countless exceptions for particular applications. More generally, the potentially enormous mandate of a general AI authority or regulator is also problematic given that, in the Netherlands, legal protections related to supervisory activities are already in need of improvement due to the far-reaching enforcement powers currently at the disposal of the authorities.Footnote 27 Furthermore, delegating multiple tasks to independent agencies (including regulatory bodies) unduly limits scope for democratic control.Footnote 28

Whether generic or specific frameworks should be used for the regulation of AI is a question also picked up by the European Commission’s AI Act, in relation to both the management of risk and the associated supervisory regime (see Box 8.1). The Act is intended to address, as far as possible, situations where AI may pose risks in practice, now or in the near future. However, it also provides for flexible mechanisms that can be adapted as AI develops and new risks emerge.

Box 8.1: General and Specific Frameworks Provided for in the Proposed AI Act

The European Commission distinguishes four categories of risk, each associated with certain AI technologies, purposes and sectors. The implication is that AI technologies and applications cannot all be treated in the same way and do not all have the same impact on society. The Commission has therefore chosen to adopt a specific approach to AI. The next question is which applications should fall under which risk management regime. The ban on biometric identification does not go far enough for some commentators,Footnote 29 while the decision to restrict the ban on social scoring to public organizations has been questioned given that the private sector is heavily involved in the datafied welfare state.Footnote 30 The dual-use nature of certain AI applications is also relevant in this context, as is the fact that AI vendors can design their systems to be modifiable by their purchasers, opening the way for manipulative use. The proposed prohibition on the sale of manipulative AI systems to repressive regimes would therefore be relatively easy to circumvent. Finally, the Commission’s proposals are vulnerable to the fundamental criticism that they pay insufficient attention to the injustice and the damage, both tangible and intangible, that AI systems can do to fundamental rights – making the proposed controls inadequate in that regard.Footnote 31

As for whether policy should be general or specific, we find that question addressed in the Commission’s proposed governance system. For the most part this builds on member states’ existing structures. For example, it envisages each state designating one or more national authorities or regulators to share responsibility for implementing and enforcing the Act. The Commission also proposes that, depending on the sector in which an AI system is to be implemented, regulators should be appointed for that particular sector.

The pervasiveness of AI means that the related legal requirements must take account of a wide range of factors and so cannot be generic. Generic frameworks are nevertheless relevant, particularly for the regulation of government use of AI. In that context the Dutch Council of State highlights the importance of cohesive legislation for the protection of citizens’ rights.Footnote 32 With regard to concrete generic frameworks, moreover, both the Council of State and administrative lawyers emphasize the role played by the general principles of good administration.Footnote 33 For example, Johan Wolswinkel regards the ‘guidelines on government use of algorithms’ drawn up by the former Minister for Legal Protection as a ‘direct consequence’ of such principles.Footnote 34 Another example of generic regulation, albeit designed for ICT rather than AI, is Franken’s ‘general principles for good ICT use’, formulated in the 1990s.Footnote 35 Almost 30 years on, these principles – availability, confidentiality, integrity, authenticity, flexibility and transparency – remain valid in guiding the search for an appropriate balance between effectively safeguarding civic values and allowing scope for the further development of AI.

Our conclusion, therefore, is that weighing up the respective merits of generic and specific approaches, and choosing between them, is always complex. Thorough exploration of the relevant issues is important, based on a recognition that regulation primarily requires an appropriate balance between effectively safeguarding civic values and allowing scope for innovation. Moreover, undue emphasis on certain sectors can lead to the neglect of general matters, given that AI, as a system technology, cannot be constrained within a particular policy area or legislative domain.Footnote 36 In fulfilling this task, government must constantly engage in dialogue as to how specific or general the frameworks regulating AI should be.

1.2 Technology-Specific and Technology-Neutral Rules

The second key issue for the regulation of AI is the extent to which the statutory rules should be neutral or tailored to particular technologies. Technology-neutral regulation has several advantages.Footnote 37 First, it means that rules are generic and can be applied efficiently in different technological contexts. Second, a technology-neutral law or provision is less likely to become obsolete when technology changes. The underlying rationale is that it is easier to determine how such legislation should be applied by referring back to more general principles. Technology-neutral legislation may therefore be seen as a more futureproof form of regulation, and hence as suitable for AI.

Nevertheless, the systemic nature of AI means that such legislation is not necessarily the best option. First, technology-neutral legislation depends on a good understanding of the workings of technologies that are functionally more or less equivalent. Such understanding enables the legislature to define requirements regarding vehicle braking distances without specifying the nature of the braking system to be used, for example. With a new technology, however, such an approach is difficult because its characteristics remain unknown. Second, a new technology may have qualities that require a different balance to be struck between legislative objectives, such as between accuracy and explainability. If explainability is prioritized over accuracy, rule-based AI systems gain an advantage over those that use deep learning. Finally, with a new technology very different solutions may be required to meet the generic objectives of a law, such as the protection of other road users. If cars are one day able to fly, for instance, the whole idea of braking distances might become obsolete. It was probably with such considerations in mind that the European Commission opted for a functional definition of AI in its proposed AI Act, supported by a dynamic list of actual technologies (see Box 8.2).

Box 8.2: Technology-Specific and Technology-Neutral Legislation, and the Proposed AI Act

With its proposed AI Act, the European Commission has sought to create a futureproof regime. The act defines AI as “software that is developed with one or more of [certain] approaches and techniques … and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.” By focusing on the function of AI systems rather than defining the technology itself, the Commission is aiming to avoid the need to modify the legislative framework as new developments occur.

Nevertheless, the Act does include an annex defining the technologies and approaches that fall within its scope.Footnote 38 One criticism of the Commission’s approach is that, as a consequence of this, the Act’s overall scope is much broader than the fields of application of its more targeted requirements.

Another relevant point is that although AI is a system technology, it is unlike earlier system technologies in certain respects. In Part I we described AI applications as ‘semi-finished products’, which by their nature are constantly changing. Moreover, AI usually exerts an influence over other technologies (computers, communication systems and so on), many of which already operate without human intervention. It is also a technology that is subsumed by, and therefore ‘disappears’ within, society’s everyday processes. These characteristics raise particular questions about autonomy, liability and responsibility, and consequently about the need for forms of regulation that recognize AI’s characteristics and are therefore technology-specific.

Inevitably, technology-specific regulation lags behind innovation: the legislation becomes outdated and requires amendment, which takes time during which innovation continues. It is important to note, however, that a great deal is now known about how to address that challenge.Footnote 39 We should also avoid falling into the trap of thinking that technological innovation and legislation must always move in step. Or, as former Chief Justice of the US Supreme Court Warren Burger put it half a century ago, “It should be understood that it is not the role and function of the law to keep fully in pace with science.”Footnote 40

The choice between technology-specific and technology-neutral legislation must therefore be made on a case-by-case basis.Footnote 41 Again, history teaches us that this is the more or less natural course of events. For example, legislation requiring third-party insurance cover applies specifically to motorists but not to cyclists, reflecting the seriousness of the potential consequences of accidents involving motor vehicles. On the other hand, the Dutch Road Traffic Act applies to all road users, not just drivers. In other words, those of its provisions applicable specifically to motorists operate within a generic framework.

1.3 Framework Levels

The third issue of AI regulation facing government is the level at which frameworks should be established. Because system technologies are by definition universal, they require both national and international policies. Consider, for example, the international arrangements and obligations regarding electricity network voltages and quality, as laid down in the UCTE agreements. More recently, numerous global agreements have been made to facilitate the working of the internet, including basic protocols such as TCP/IP, DNS and routing. The recognition that many of the challenges in this field are global has also led to the development of various international consultative mechanisms. Indeed, the proliferation of national AI directives has been accompanied by regulatory convergence at the international level.Footnote 42

In addition to bilateral initiatives on AI, such as those between the EU and Japan, between France and Canada, and between Germany and India, there have been several multilateral ones aimed at the development of common rules. Examples include the OECD’s common ethical principles for AI, based on the concept of ‘trustworthy AI’ developed by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG).Footnote 43 In June 2019 the G20 also formulated a set of ethical principles, based largely on the OECD’s.Footnote 44

Other forums, such as UNESCOFootnote 45 and the Council of Europe,Footnote 46 are also considering rules on AI. Meanwhile, various countries are intensifying their efforts in the field of international standardization. Although the organizations active in this area are concerned mainly with the technical aspects of AI, they are increasingly looking to address ethical aspects as well. There are other reasons for seeking international co-operation, especially where standardization is concerned. For example, China has explicitly stated its ambition to be actively involved in the process of global standardization, particularly in the field of facial recognition. In fact, a Huawei director chairs the ISO/IEC Joint Technical Committee for information technology, one of the world’s most important standardization bodies. The USA and EU have similar ambitions and aim to promote their vision of AI within relevant organizations. Consequently, standardization bodies such as the ISO, the IEEE and the ITU have become battlegrounds where countries strive to have their own standards adopted globally to give their companies a competitive advantage.Footnote 47 For more on this, see Chap. 9.

The choice between applying existing frameworks and developing new ones therefore depends not only on the technology but also on the level at which AI-related issues emerge, not to mention the related strategic ambitions of the country concerned. Here again, the implications of AI’s systemic nature are apparent. Given the association between autonomous weapon systems and warfare, such systems cannot be regulated at the same level as AI-based medical devices. The European Union has long had its own licensing framework for medical devices, in which the Dutch Health and Youth Care Inspectorate and other bodies are involved. Although, as the next chapter makes clear, international positioning is crucial as AI makes the transition from lab to society, the ultimate choice of regulatory level depends on many other – largely national – issues. With its AI Act, the European Commission has clearly signalled that the preferred regulatory level for some issues is the European arena (see Box 8.3).

Box 8.3: Regulatory Levels and the Proposed EU AI Regulation

The European Commission’s decision to define European regulations implies a choice in favour of a directly applicable horizontal regulatory framework for AI, and for high-risk AI applications in particular. The legal basis of the AI Act is the Treaty on the Functioning of the European Union, specifically its provisions on the establishment and functioning of the single market. The Commission’s decision was informed to a significant extent by its desire to promote a healthy internal market for AI systems and thus prevent fragmentation.Footnote 48 By assuring the values and fundamental rights recognized by the EU, the AI Act additionally gives the public the confidence to embrace AI applications, as well as signalling clearly to companies that only applications that respect those values and rights are welcome in the EU market.Footnote 49 However, the use of European single-market instruments as the basis for the regulation of AI represents a limitation, particularly for AI applications that serve broader societal interests. The reason is that the AI Act merely defines certain minimum requirements designed to manage AI-related risks and problems; no positive ethical framework is provided. The portion of AI’s societal potential that the market cannot realize – without assistance, at least – is not addressed by the act.

Despite the harmonization process that the European legislature has begun, companies and individual citizens still have to contend with regulatory inconsistency amongst member states. This gives rise to uncertainty. Although the AI Act, as a regulation, does not require implementation in national law – which would inevitably lead to differences between countries – it leaves many issues unaddressed. Given the systemic nature of AI, there remains a need to ensure that, in the international context, legislation is not (or does not become) unduly inconsistent and does not fail to provide adequate legal certainty. Especially in a cross-border context, legal uncertainty as to what rules apply to the digital world constitutes an increasing problem. Not only because rules differ from country to country, but also because it is often unclear which rules – that is, which country’s rules – apply. This is the case, for example, when AI systems make use of datasets stored in cloud applications and there is uncertainty as to which country’s law applies because their actual location varies in line with available capacity. On occasion, one country’s rules require something to be done that another country explicitly prohibits. In a preliminary advisory report for the Royal Dutch International Law Association, Australian professor Dan Svantesson warns about such a problematic scenario, which he refers to as ‘hyperregulation’.Footnote 50 Not surprisingly, many lawyers and other commentators have called for a far more uniform global legislative agenda. Europe could take the lead in this regard – by means of a European Digital Rule of Law, for example.Footnote 51

1.4 Actors and How They Exert Control

Finally, the way that power relationships develop is very important when it comes to the question of how AI should be regulated. Where can and should the market play a role, and is self-regulation appropriate? Where is it possible to rely on citizens’ personal responsibility, and where and when is state regulation necessary? In the Netherlands the state’s recognition that there is scope for self-regulation is anchored in formal regulatory guidelines. In other words, self-regulation is an explicit policy option for government.Footnote 52

Technology companies are now under pressure and must accept responsibility for defining rules on the development and use of AI. In this regard the landscape has changed considerably in recent years. Whereas they were previously averse to regulation, nearly all the large tech firms are now responding to mounting criticism by working on codes and guidelines clarifying the rules that AI should meet. In some cases, substantive proposals have also been made, such as the creation of ethical review bodies. Moreover, an increasing number of companies are calling for government regulation – in part because they apparently fear losing market share if they do nothing. Various CEOs publicly expressed their views on this matter around the time of the 2020 World Economic Forum meeting in Davos. Google’s Sundar Pichai said that AI required regulation because of its “potentially negative consequences”. Microsoft president and chief legal officer Brad Smith warned that governments should not wait until the technology is mature before acting to regulate its use. Microsoft accordingly set up its own committee to make policy recommendations. Meanwhile, IBM CEO Ginni Rometty announced the launch of an internal research lab to devise policy initiatives. Google did set up an Advanced Technology External Advisory Council (ATEAC), but soon scrapped it following a controversy about its membership. Facebook, too, announced the formation of an Oversight Board, which the media dubbed the company’s ‘supreme court’. In May 2021, for example, this board reviewed the decision to ban former US president Donald Trump from the platform.

Self-regulation is a form of regulation practised within an organization or its operational setting. In its most developed form, self-regulation implies private actors themselves defining, implementing, policing and enforcing appropriate norms and rules.Footnote 53 The tools used in this context may include private standards, voluntary programmes, professional guidelines, codes of conduct, statements of best practice, public-private partnerships and certification programmes. Some people also favour the use of process-based approaches, where ethical principles are programmed into machines and internal supervision is provided.

Self-regulation can have the advantage of increasing the engagement and support of relevant actors, since its principles are defined from the bottom up by the actors themselves. Theoretically, it also facilitates the use of much more precise standards because these are developed by people who know what works in practice. Furthermore, self-regulation need not be the final regulatory mode adopted; it can also serve as a steppingstone to legislation. Partly for this reason, various public authorities apply ‘light-touch’ regulation. In the Netherlands, for example, the Ministry of Justice and Security has defined a set of ‘guidelines on the use of algorithms by government’. For its part, the UK has proposed an ethical code for AI as a means of avoiding the harmful effects of premature legislation.Footnote 54 Generally speaking, the adoption of bottom-up initiatives is a faster process than the implementation of legislation, making them useful for addressing urgent issues. The resulting rules are also easier than legislation to amend or rescind in line with changing circumstances.

But, of course, self-regulation entails a democratic deficit that is potentially problematic in that it may undermine the legitimacy of the rules concerned. Another significant shortcoming is that rules defined privately are more difficult to enforce.Footnote 55 Often, therefore, not all actors subscribe to them. Where AI is concerned, it is mainly benevolent actors that participate in initiatives to ensure conformity with ethical principles and societal values. Meanwhile, more problematic and controversial applications remain unregulated. On top of that, the enormous proliferation of charters, guidelines and the like can make co-ordination difficult. Which document should be followed in a particular case, and what happens if they are mutually contradictory? Who acts as referee in such circumstances?

Gary Marchant predicts that ‘soft law measures’ will become the default in the years ahead because of AI’s rapid development and global spread. The most that government can do is resolve minor problems here and there. According to Marchant, it is therefore necessary to investigate how self-regulatory mechanisms for emerging forms of AI can be indirectly enforced and co-ordinated. Bert-Jaap Koops has made the same point in a slightly older article on ICT regulation. He suggests that pure self-regulation barely exists in practice. More often than not, government also plays a role.Footnote 56 In practice, self-regulation and government regulation frequently coexist, supplementing and reinforcing one another in key respects. In many countries, for example, private actors and governments have collaborated on the formulation of AI strategies.Footnote 57 It is also common for basic standards to be defined in law, but with the details left to sector-specific self-regulation. According to Koops it is necessary to have a combination of consistent government and public pressure, rewards for prosocial behaviour and monitoring mechanisms to ensure that measures are more than mere window-dressing.

This point is very important in relation to the development of AI, now that it is becoming clear that self-regulation is not sufficient to deal with many issues. As it continues to develop, AI still faces numerous technical challenges that complicate its trustworthy application. Self-regulation assumes that developers are able to bring products to market that meet industry standards, including trustworthiness criteria. According to AI researchers Gary Marcus and Ernest Davis, however, that is not always the case. They see the absence of good development practices as particularly problematic here. AI research tends to focus on short-term solutions, such as code that works immediately, but without the layer of technical safeguards seen in other fields. Stress testing is almost unheard of, and machine learning systems with adequate risk margins are almost never deployed.Footnote 58 Furthermore, good engineers in other domains always provide their products with fallback options such as duplicate brakes, multiple control systems and fail-safe functions. Such fallbacks remain rare in AI; a minimal sketch of what one might look like follows below.
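By way of illustration only, the sketch below shows one simple form such a fail-safe could take in an AI-supported decision process: the system acts automatically only when the model’s confidence exceeds a set risk margin, and otherwise defers to a human reviewer while logging the event. The model interface, threshold value, logging setup and toy model are assumptions made for the example; they are not requirements drawn from this report, from Marcus and Davis, or from any specific standard.

```python
# Illustrative sketch only (hypothetical names and threshold); not a prescribed method.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_fallback")


@dataclass
class Decision:
    outcome: str       # e.g. "approve", "reject" or "refer_to_human"
    confidence: float  # the model's estimated probability for its outcome
    automated: bool    # True if no human review was involved


def decide(features, model, threshold: float = 0.95) -> Decision:
    """Act on the model's output only when it is sufficiently confident;
    otherwise fall back to human review and log the event for later audit."""
    outcome, confidence = model.predict(features)  # assumed interface: (label, probability)
    if confidence >= threshold:
        logger.info("Automated decision %r (confidence %.2f)", outcome, confidence)
        return Decision(outcome, confidence, automated=True)
    # Fail-safe branch: below the risk margin the case is handed to a person,
    # and the low-confidence event is recorded, e.g. for an error register.
    logger.warning("Confidence %.2f below threshold: referring to human review", confidence)
    return Decision("refer_to_human", confidence, automated=False)


class _ToyModel:
    """Stand-in model used only to make the sketch runnable."""
    def predict(self, features):
        return ("approve", 0.80)  # deliberately below the 0.95 threshold


if __name__ == "__main__":
    print(decide({"amount": 120}, _ToyModel()))
```

The point of the sketch is not the particular threshold but the pattern: the automated path is bounded by an explicit risk margin, and anything outside that margin degrades to human judgement and an auditable record rather than failing silently.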

The global success of the big technology companies owes much to an approach focused on the large-scale marketing of new but usually unfinished products. Users then take care of further product development and optimization. This development model is diametrically opposed to that used for cars, pharmaceuticals or aeroplanes, which are tested extensively before entering use. Of course, unlike many physical products, software can easily be modified and updated remotely; the detection of flaws therefore does not require expensive recalls. But as applications are integrated more and more with real-world processes and – as is the case with AI – come to underpin decisions that have major impacts on people’s lives, the practice of releasing unfinished products entails ever greater risk. As the European Commission argues, there is therefore an increasing range of circumstances in which the use of AI is permissible only if certain basic trust requirements are satisfied.

For self-regulation to be an effective and legitimate option, moreover, AI must also conform to a wider set of principles that reflect society’s expectations and are codified in national and international treaties. Here, existing guidelines could be distilled into a set of principles very similar to those used in medical ethics: respect for human autonomy, harm prevention, fairness and explainability.Footnote 59 Those points are already central to the OECD’s common ethical principles for AI and the work of the European Commission’s High-Level Expert Group on Artificial Intelligence. That said, however, the frequent references to medical ethics cannot hide the fact that many of the conditions required for the implementation of those principles are not currently being met.Footnote 60

First, unlike in medical science, AI development activities have no common goal comparable with the promotion of patient health and welfare. Second, as explained in Part I, AI development is a much newer field than medical practice, with a very short professional history and consequently barely any clearly articulated norms of good conduct. Third, by contrast with medical practitioners, AI developers come from a variety of disciplines and professional backgrounds, with divergent histories, cultures, incentive structures and moral obligations. The most closely related established discipline, software development, is not a legally recognized profession with obligations towards society – in part because it lacks a system of licences and clearly defined professional duties of care. The two biggest professional organizations, the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM), have published and repeatedly revised various codes, but these are relatively concise and theoretical, and they do not include recommendations or specific behavioural norms.Footnote 61

Perhaps the most important difference, though, is that AI development is not governed by any discipline-specific legal or professional accountability mechanisms. At present there is almost no scope to seek redress or remedy. Data breaches and privacy infringements form exceptions in this regard, but that is because these abuses are covered by formal legislation (GDPR). Protection of other values is left to private self-regulation mechanisms, whereby a long-term commitment to upholding those values is by no means assured.Footnote 62 This is particularly problematic now that discussion surrounding AI in the business community and some sections of academia has come to focus primarily on the question of how and under what conditions the technology should be used. The fundamental desirability of such use is barely considered in this debate.Footnote 63

Moreover, it is characteristic of a system technology such as AI that its introduction to society raises issues that transcend the domain of the technology companies and other private actors involved. Firstly, tech firms tend to propose technical solutions to problems,Footnote 64 which is hardly surprising given that that is where their expertise lies. But the scope of such solutions is too limited to tackle these issues effectively. Discrimination, for example, is primarily a societal problem that requires solutions in such domains as institutional access, coupled with a normative debate about what forms of discrimination we find socially acceptable. Secondly, some issues do not operate at the company or application level or cannot be addressed adequately there. Even when companies comply with all relevant legislation and regulations, such second- and third-order effects can still arise. One example is changes in employment patterns and the associated need for training. Thirdly, some questions concern the purposes for which AI may and may not be applied. Should it be used in autonomous weapon systems, for instance? Such matters are not the province of technology companies, at least not exclusively, because of potential conflicts of interest. Fourthly, self-regulation is not an option when human rights and the fundamental standards and values of democracy are at stake,Footnote 65 as various academics argue is the case with AI.Footnote 66

Where government standardization of AI is concerned, therefore, debate should not be confined to the characteristics of the technology itself (is it reliable, safe, transparent and explainable?) and the activities of the companies and organizations that develop and utilize it. A system technology requires a much broader discourse, encompassing such matters as the goals we wish to pursue as a society and hence where, for what purpose and under what conditions we want to use AI,Footnote 67 as well as whether restrictions or even bans on its use in certain domains (as also proposed in the European Commission’s AI Regulation) are needed.

The systemic nature of AI also results in the overlap of societal, political, commercial and research interests. No one actor or group of actors can simultaneously defend all of these. It is therefore impossible, and also undesirable, for a single actor to monopolize the ethics of AI or to dominate the agenda with regard to the regulatory frameworks governing it. In order to prevent the private sector and, to some extent, the academic community from defining what constitutes a good AI society, authors such as Corinne Cath believe that a ‘bolder’ strategy is required. They envisage this as addressing the entire spectrum of unique challenges that AI presents for society with regard to fairness, social equality and accountability.Footnote 68 As argued in Chap. 7, the formulation of that strategy should involve all parties affected by AI.Footnote 69 Government itself is of course part of this matrix, since it has the task of considering the big picture and the interests of all the various parties concerned. The proposed AI Act also demonstrates that, after thorough consideration by government, further steps may be required to ensure that input is obtained from other parties – during the concretization of legal requirements, for example, or to draw attention to injustice and harm (see Box 8.4).

Box 8.4: Actors and the AI Act

By proposing its AI Act, the European Commission has clearly taken the initiative on regulation in a manner that will influence the course of market developments. Nevertheless, private actors still have a part to play. One aspect of the proposed act that has attracted little comment is that much of the responsibility for regulating AI, in particular high-risk systems, will rest with standardization organizations such as CEN (the European Committee for Standardization) and CENELEC (the European Committee for Electrotechnical Standardization).Footnote 70 The act requires any party wishing to market an AI system in the EU to comply with certain as yet undefined AI standards. These will include mandates to establish a quality system, draw up technical documentation, organize human oversight and undertake logging.

The standardization process is sensitive to commercial lobbying, however, and a lack of resources and expertise often makes it difficult for interest groups to participate. Consequently, some commentators have expressed concern that the new legislative framework underpinning the proposed act will not adequately protect consumers’ interests. Another important issue is that high-risk applications have numerous implications for fundamental rights, a field in which standardization bodies have limited expertise and experience.

A further criticism is that the AI Act does not establish procedural rights for individual citizens or interest groups, such as the right to complain, seek redress or dispute a decision.Footnote 71 In other fields the existence of such rights has proven an important driver for the development of jurisprudence, particularly when influential companies have appeared to exercise undue influence over policymaking, creating a need for balance. As currently drafted, the act allows only companies that are subject to its requirements to challenge government decisions. Given that AI has implications for fundamental rights, it is pertinent to ask whether and to what extent other parties should also have a say.Footnote 72

In its response to the proposed act, the Dutch government has emphasized the importance of clarity for citizens and consumers as to how they can exercise their rights and has expressed a desire to see appropriate provisions made in specific consumer (and other) regulations.Footnote 73

It is clear from the first part of this chapter that government has a responsibility to regulate the embedding of AI within society. In this regard it should focus on the tools available for use in that context and should address such matters as appropriate regulatory characteristics and levels, as well as the extent to which private actors are willing and able to protect civic values of their own accord.

The scope and intensity of government’s role are separate matters, which are considered in the second part of this chapter. In view of the history of previous system technologies, we argue there that the process of embedding such a technology goes hand in hand with increasing government involvement. In this case that should not be restricted to the regulation of AI and acute AI-related problems but extend to the long-term co-evolution of technology and society, including the associated structural challenges, opportunities and risks.Footnote 74 This implies government regulation as a means of shaping the digital living environment, not to mention interaction between that environment and numerous issues in the physical world. The adoption of such a comprehensive, future-oriented view of regulation is a prerequisite if government is to properly discharge its responsibility to protect civic values.

Key Points – Government Standardization of AI

  • The systemic nature of AI means that it touches on a variety of societal, political, commercial and research interests. Comprehensive consideration, safeguarding civic values and protecting different parties’ interests are possible only if government plays a guiding role.

  • As a system technology, AI is going to become ubiquitous. Government must therefore be able to oversee the full spectrum of societal challenges it presents and to intervene promptly with legislation where necessary. Government should not confine itself to the technology itself or to users’ activities, but should also take a broad view encompassing such matters as the goals we wish to pursue as a society and hence where, for what purpose and under what conditions we want to use AI.

  • Government regulation of AI should not take a standard approach. Decisions regarding the regulatory instruments to be used (legislation, self-regulation) and the level at which regulation should take place (international, national, local) will require an appropriate balance to be found between the effective assurance of public values and the provision of scope for innovation.

  • In order to address these challenges with prompt, effective and significant interventions while maintaining policy cohesion, government must adopt a broad legislative strategy.

2 AI Regulation and the Digital Living Environment

In the regulation of earlier system technologies, government intervention gradually increased over time. At first a new technology is often given space to develop. As it enters more widespread use and becomes more deeply embedded in society, however, and as its effects become clearer, more formal requirements often become necessary. When motor cars entered use, their growing prevalence gradually led to more dangerous situations and traffic accidents, prompting large-scale protests in the US and Europe. In the US the car lobby won the day and the automobile became the dominant mode of transport. In Europe, by contrast, public transport systems developed and much more explicit allowance was made for pedestrians, to facilitate the mobility of the less well-off.

The effects of a system technology and the opportunities and risks associated with it change gradually over time, therefore. Consequently, the focus of regulation widens from the technology itself to its general effects, such as modification of the dynamics of the economy and the context in which it is used. So, intervention to regulate earlier system technologies was extended to address related matters such as road safety, urban pollution and traffic congestion, the safety of consumer electronics and emissions of greenhouse gases. As these examples show, there are always trade-offs to be made and in this respect companies and pressure groups always seek to influence the embedding process in line with their own interests.

Although government intervention is sure to increase gradually with AI as well, it is not possible to say in advance how extensive and intensive the regulation needs to be. Integrating a system technology into society is a process that spans decades and involves considerable uncertainty, particularly regarding the impact it will have and how regulation can manage that. In this section we begin by considering this uncertainty, which can be the cause of both tardy and premature intervention to prevent problems. The timing of interventions is therefore the second theme we explore. In that context we also reflect on the need for government to be alert to outside influences, particularly market forces. The salient point is that, as a technology becomes more embedded, it becomes increasingly difficult for government to counter or redirect the regulatory influence of other actors.

The final major factor affecting the extent and intensity of government regulation is the interaction between a system technology (in this case AI) and more general developments and challenges impacting society. A system technology shapes society and society shapes the technology. The position adopted by government will have a major bearing on the nature of this interaction and whether it can be linked to developments that at first sight seem to have little to do with AI (climate change, say, or the sustainability of the care system). Our discussion of these three issues leads us to the conclusion that there is an urgent need not only to regulate such factors as privacy, liability, transparency, insurability and consumer protection, but also to organize the digital living environment in a way that will enable AI to support public values in the long term.

2.1 Uncertainty

As the histories of the internal combustion engine and electricity show, regulatory frameworks are not created overnight but often continue evolving for decades. When a system technology has recently made the transition from the lab to society, very specific practical embedding challenges often arise, relating to such matters as liability, insurability and – in the specific case of AI – the performance of legal acts by autonomous systems and copyright on algorithms. In many cases these can be addressed by falling back on and updating existing frameworks, but in many instances the definition of special rules for AI would currently be premature. The same applies to initiatives like the creation of an AI authority or a special AI regulatory body, because the sphere of responsibility of such an agency cannot yet be defined. Not enough is known at present about generic patterns in relevant fields such as AI deployment-related risk, legal protection requirements or competition and market regulation.

With a system technology such as AI, government should therefore initially take an incremental approach to regulation. For the management of known risks, existing rules can be clarified or amended fairly soon after the introduction of the system technology, or new rules can be implemented. This is already happening in various AI-related fields, albeit fairly slowly and not on a systematic basis. However, both the complex technological structure of AI and its association with particular usage contexts introduce considerable uncertainty here, with a high degree of complexity and therefore unknown risks.Footnote 75 Their details will become apparent only once AI enters more intensive, large-scale use, and so careful monitoring involving early-warning regimes, error registers and the like is required.Footnote 76
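Purely as an illustration of what such monitoring instruments might involve, the minimal sketch below models an error register and an early-warning check. The field names, example incidents and threshold are hypothetical and do not describe any existing register; they only show how recurring reports about the same system could be surfaced.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Incident:
    # Hypothetical fields; an actual register would be defined by the responsible body.
    reported_on: date
    system: str          # e.g. "benefits risk-scoring model"
    context: str         # e.g. "social security"
    harm: str            # e.g. "wrongful rejection", "discriminatory outcome"
    reporter: str        # e.g. "court", "inspectorate", "member of parliament"

def early_warning(register: list[Incident], threshold: int = 3) -> list[str]:
    """Flag systems that accumulate repeated incidents of the same kind of harm."""
    counts = Counter((i.system, i.harm) for i in register)
    return [f"{system}: {harm} reported {n} times"
            for (system, harm), n in counts.items() if n >= threshold]

register = [
    Incident(date(2022, 1, 10), "benefits risk-scoring model", "social security",
             "wrongful rejection", "court"),
    Incident(date(2022, 2, 3), "benefits risk-scoring model", "social security",
             "wrongful rejection", "inspectorate"),
    Incident(date(2022, 3, 21), "benefits risk-scoring model", "social security",
             "wrongful rejection", "member of parliament"),
]
print(early_warning(register))
```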

Parties close to developments will typically be the first to become aware of issues and problems associated with the embedding of AI. As well as the community organizations discussed in Chap. 7, courts, supervisory bodies and parliament can all perform an early-warning role.Footnote 77 As community representatives, members of parliament can hear of incidents that may indicate threats to civic values. Courts are asked to rule on cases where litigants have been affected by the use of algorithms, as in the Council of State cases mentioned earlier in this chapter. Inspectorates and regulators are able to observe the introduction of new applications to the market, such as AI-based medical devices and car driver support systems, and have the task of supervising processes in which AI is increasingly being used, such as risk assessment, cybersecurity, social security and logistics.

Such bodies also perform a societal role in the early detection of developments with implications for the protection of public interests and the balance of power.Footnote 78 In the Netherlands, for instance, the Authority for the Financial Markets and DNB (the Dutch Central Bank) have investigated the use of AI in the insurance sectorFootnote 79 and the Court of Audit has examined how the national government deploys algorithms.Footnote 80 The latter concluded that the responsible development of complex automated applications requires better supervision and better quality control, and accordingly developed an assessment framework. A number of inspectorates and market regulators additionally took the initiative to set up an interdepartmental working group to share knowledge and experience of the supervision of AI and algorithms. The more difficult it is to resolve problems within existing frameworks and/or the more generalized those problems become, necessitating the use of generic measures, the more important it is to ensure good feedback of such bodies’ observations to the political and public administration communities.

2.2 Timing of Government Interventions

It is therefore clear that government, more specifically the legislature, should initially proceed cautiously before assuming a more active role in due course. However, the scope for attaching effective requirements to the use of a technology changes over time. Once a technology is firmly established, influencing its use becomes complicated and sometimes even impossible or impractical. That is due to the so-called ‘Collingridge dilemma’ and to actors other than the government guiding the process of technological embedding and thus performing a regulatory role.

The Collingridge dilemma describes an information and power problem facing government. It was first formulated by David Collingridge in his 1980 book The Social Control of Technology: “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time-consuming.” In the early stages, when it is still possible for government to influence the development of a technology, its effects have yet to become apparent and so there is a significant risk that legislation will prove inappropriate, ineffective or even counterproductive. But by the time its effects are manifest, and it is clear what needs to be done, the technology is so firmly embedded that legislating to bring about change involves considerable cost.Footnote 81

Over the years the Collingridge dilemma has attracted considerable attention. One may interpret it as implying that government should not interfere with new technologies, certainly in their early stages. When a technology is in its infancy, it is vulnerable and therefore generally warrants a careful, nurturing approach. At this stage, moreover, its introduction is surrounded by unknowns and uncertainties. At the same time the Collingridge dilemma implies that the opportunity to intervene may be lost if nothing is done until the technology has become pervasive. Those dangers are of course two sides of the same coin: if one starts a race late, one has ground to make up before the finish and that may prove impossible. While there is truth in the Collingridge dilemma, it is somewhat simplistic – which prompted Wendell Wallach to describe it as a dogma.Footnote 82 Collingridge disregards the many forces that influence how a technology is used in practice, at all stages of its societal embedding.

2.3 The Guiding Effect of Technology

Lawrence Lessig’s 1999 book Code is a classic treatise on such forces.Footnote 83 The author argued that digital technology is strongly influenced not only by legislation, market forces and societal standards but also by its technical design – in other words, by its code.Footnote 84 Some years earlier, in his work on the politics of technology, Langdon Winner had demonstrated that the workings of a technology are also a form of regulation.Footnote 85 The same is true of AI. Consider the algorithmic moderation that platforms use to proactively police the online content shared by internet users.Footnote 86

The development and application of AI are also subject to various forces, from the existing legal rules and the private actors that develop the technology to societal concepts of autonomy and human dignity, which influence decision-making in the system design process regarding such matters as the prioritization of output accuracy over explainability. Standardization and the extent to which its tone is set by the private sector are further examples. A further issue must be considered in relation to AI: the controlling and therefore guiding role played by humans, and hence their influence over the regulatory power of the technical design, are changing. This aspect is particularly problematic because AI systems are generally non-transparent, complex and self-learning.Footnote 87

The fact that regulation becomes more difficult over time is attributable not to the technology’s deterministic ‘natural’ or ‘unavoidable’ development but to path dependency. This phenomenon is best illustrated by the way the road network operates. When constructing new roads, existing routes are often followed. But many of these are not ideal. They are used nonetheless because the existing urban environment is adapted to them. It would be extremely expensive to move all the homes and businesses along an existing road, for instance. So, when a route is chosen for a new road, the efficiency of the ideal path must be weighed up against a wide range of other interests associated with decisions made in the past, which often prevail. The same process is evident throughout society. For example, the supposed exponential growth of computing power (Moore’s law) is not really a law but a self-fulfilling prophecy – in fact nothing more than an annual goal set for engineers, labs and companies, which then dictates that the digital infrastructure must grow, driving demand for staff, research funding and fast semiconductors.

In such a process there is always a point at which a certain interpretation of the design or use of a technology becomes the norm and ‘closure’ occurs.Footnote 88 It then ceases to be controversial and one of the competing designs is retained while the others are discarded. Around 1900, for example, various types of vehicles were being pioneered, some with electric motors and others with different kinds of internal combustion engine.Footnote 89 Then, over the course of the twentieth century, the petrol-fuelled engine became predominant, partly because of the limited range of electric vehicles, a problem that has yet to be fully resolved. Another familiar example is the bicycle. Nowadays this is an efficient mode of transport with two wheels of the same size.Footnote 90 But it was initially seen as a masculine mode of transport requiring considerable strength and athleticism. Gradually competition developed between the ‘penny farthings’ responsible for that perception and machines resembling the modern bicycle, designed for safety and efficiency. After several decades the modern version prevailed, and the others disappeared from use. It should be noted, though, that the quality of a technology is not always the deciding factor in closure. In the 1970s and ’80s, for instance, there was competition between three mutually incompatible video recording standards: VHS, V2000 and Betamax. Marketing factors such as price and support, rather than the superior technical quality of the competing systems, proved decisive in VHS ultimately coming to dominate.

Closure has far-reaching consequences: alternatives disappear and cease to be the subject of debate. People, relationships, other technologies and existing practices and procedures – once closure occurs, all their interrelationships are redefined. The lesson is that, between the introduction of a technology and that technology becoming firmly embedded in society, there are always one or more tipping points. At those points problems associated with the technology become apparent while there is still an opportunity to address them.Footnote 91 Technological change never comes out of the blue, even in the most dynamic situations. The window of opportunity for action may be very short, or it could remain open for years. The faster technological development proceeds, the less scope for intervention there is. That scope can also be reduced by widespread adoption, as with the mobile phone. Because it is so widely used, this has become an important platform for many other products and services, such as smart domestic appliances and payment services. Various factors can bring about closure, then.

In light of the considerations set out above, we may conclude that if government allows itself to be trapped for too long by the uncertainty paradox that every new system technology creates,Footnote 92 it runs the risk of missing the opportunity for effective and significant intervention. In this regard it is instructive to consider the regulation of the internet, or rather the lack of it, as a cautionary tale.Footnote 93 In such situations failure to act can clearly have major consequences for the proper safeguarding of civic values. When government is inactive, other forces have ample opportunity to control the way the technology is embedded in society. Where the internet is concerned, a handful of American tech companies have been able to drive the development of social media platforms.Footnote 94 When it comes to AI, failure to regulate will give free rein to forces specific to the technology itself, potentially reducing the scope for transparency, explainability and human intervention. The crucial point here is that – consistent with the Collingridge dilemma – the more time passes, the more difficult it gradually becomes to counter the influence of non-governmental forces and to guide AI’s direction of travel.

The EU’s move to regulate various specific AI-related issues is therefore welcome. In relation to the overarching task of regulation as described in this chapter, it is important to note that the Commission has proposed applying rules of varying strictness on the basis of a four-tier risk classification systemFootnote 95 reflecting the function, intended purpose and modalities of the AI application. In other words, the Commission is not focusing exclusively on the technology or operating based on a generic assessment of its use, but instead bearing in mind the specific context. Biometric identification is in principle prohibited, for instance, since it poses specific risks to fundamental rights – in particular human dignity, respect for private and family life, the protection of personal data and non-discrimination – but it may nevertheless be used in certain strictly defined, limited and regulated circumstances such as targeted searches for missing children or the perpetrators of serious crimes. By adopting a risk classification system, European and national authorities should be able to obtain early insight into the problems and risks associated with AI, thus considerably increasing their ability to influence the course of developments. Such an approach may be compared with the early identification of problems associated with introduction of the internal combustion engine (traffic accidents at the start of the car age) or the construction of the first railways (ownership issues, outdated assumptions about rights of way).
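To make the risk-based logic concrete, the sketch below expresses a simplified version of such a four-tier scheme in code. The tier names follow the categories commonly attributed to the proposal (unacceptable, high, limited, minimal risk); the example purposes and contexts are illustrative assumptions, not a reproduction of the Act’s legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """Four tiers commonly attributed to the proposed AI Act (simplified)."""
    UNACCEPTABLE = "prohibited"           # e.g. social scoring by public authorities
    HIGH = "high-risk"                    # e.g. AI used in recruitment or credit scoring
    LIMITED = "transparency obligations"  # e.g. chatbots that must disclose they are AI
    MINIMAL = "minimal risk"              # e.g. spam filters

def classify(application: dict) -> RiskTier:
    """Toy classifier: the tier follows from the application's purpose and
    context of use, not from the underlying technology."""
    purpose = application.get("purpose", "")
    context = application.get("context", "")
    if purpose in {"social scoring", "untargeted mass surveillance"}:
        return RiskTier.UNACCEPTABLE
    if context in {"recruitment", "credit scoring", "law enforcement", "medical devices"}:
        return RiskTier.HIGH
    if purpose in {"chatbot", "deepfake generation"}:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# The same model counts as high-risk in one context and minimal in another.
print(classify({"purpose": "ranking", "context": "recruitment"}))     # RiskTier.HIGH
print(classify({"purpose": "ranking", "context": "music playlists"})) # RiskTier.MINIMAL
```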

The latter conclusion brings us to the final issue associated with the overarching task dealt with in this chapter: what should be on the regulatory agenda? It will be apparent from the considerations set out above that – no matter how uncertain the process of AI’s societal embedding or its outcome may be – government’s regulatory activities in the years ahead must not be limited to the maintenance of existing frameworks and the formulation of principles for the concretization of rules by supervisory authorities or the judiciary. The EU’s proposed AI Act illustrates the need for a broader legislative agenda. The European legislature’s intervention is consistent with the trend of governments, including ours in the Netherlands, being urged by an increasing number of parties to develop sound technology policies.Footnote 96

With consideration for future developments, the overarching task of regulating AI entails more than addressing countless separate issues, risks and advances. With a system technology, it is not only the technology itself and its practical applications that require legislation but also its wider effects on society. It is very important here that government not become mired in the countless separate individual legal issues at play in the many fields where the technology is used, but instead keep sight of the bigger questions associated with its embedding. This requires it to design its regulatory agenda for AI on the basis that embedding the technology is essentially about responsibility for and the design of our digital living environment.

2.4 A Legislative Agenda for the Digital Living Environment

Various historical examples illustrate the need for the legislature to consider AI developments primarily from the perspective of society’s general organization in the years ahead. When motor cars, aeroplanes and electricity were introduced, legislators were obliged to make decisions with a scope that extended far beyond individual policy portfolios. In the cases of cars and electricity, that respectively involved developing new visions of land-use planning and the organization of the physical living environment. Like electricity, AI is both a commodifiable good that can be used to gain economic advantage and a good beneficial to large groups within society, or even the entire population. Electricity extended the day, made homes safer and cities cleaner and improved people’s lives in many ways. AI can be expected to deliver similar benefits. However, that will require government to ensure that the technology is used where it can be most valuable and is deployed on a scale and for purposes compatible with the aspirations of Dutch society.

Like the car and the aeroplane, AI is a very energy-hungry technology.Footnote 97 It can also be a major cause of pollution in its broadest sense. “It has a lot to offer, just as industrial innovation, intensive agriculture and the chemicals industry have brought us a great deal – but at considerable cost to the environment, the climate and our planet. An excessive cost, it now turns out. Similarly, innovative digitalization, economic and social progress and growth all involve risk. Here again, not only are the interests of individual citizens and companies at stake, but also collective goods and values.”Footnote 98 In short, as AI’s role in society increases we will see a growing need to make fundamental decisions about the organization of society and the ‘digital living environment’.

Ernst Hirsch Ballin has called for AI policy to be linked particularly to the promotion of resilience over the course of the human lifetime to counter our inherent and inevitable vulnerability as people. In his analysis of the relationship between AI and human rights, Hirsch Ballin formulates three benchmarks for AI policy – two of which are pertinent in relation to the organizational issues being discussed here. The first is that artificial intelligence should be linked primarily to people’s aims in life. In other words, we need to develop AI policies “to correct the complex processes that prevent people from feeding themselves, educating themselves and otherwise developing their life projects”.Footnote 99 The second relevant benchmark is the need for AI’s purposes and design to be “linked to humanity, respect for diversity and support for freely accepted life projects” at all times.Footnote 100

When addressing developments in fields quite distinct from AI that affect and shape society in fundamental ways, the Dutch government has previously brought together numerous separate policy portfolios to view them as an integrated organizational issue. Land-use planning is a good example. In recent decades several administrations have published policy documents setting out tasks and objectives in this area, complete with guiding philosophies, toolkits and implementation plans.

A good Dutch example of the kind of policy document needed to address the challenges of AI and the digital living environment is the 234-page 1998 paper from the Ministry of Justice on ‘legislation for the information superhighway’ (Wetgeving voor elektronische snelweg). Its purpose was “to provide a legitimate basis for government action during the transition to the information society, insofar as legislative instruments lend themselves to that function; to translate that legitimate basis into an assessment framework for the legislature; to differentiate between the physical world and the electronic environment in key areas; [and] to make proposals regarding real-world issues that arise as a result of technological developments.” The current Dutch Digitalization Strategy, which is updated annually, is far less explicit than the documents mentioned above regarding policy tasks, steering principles and the regulatory toolkit. Furthermore, and unlike the information superhighway paper, it does not address the full breadth of its policy domain, digitalization. Although its declared aim is “a successful digital transition in the Netherlands”, the strategy in fact consists largely of a list of practical action points, most of them for the public sector, organized under a number of thematic headings.

The risk with such a list-based approach is that broader and often far more fundamental developments receive insufficient attention or are not addressed on the basis of a long-term vision. Below we identify three developments that the WRR considers crucial to AI’s integration into society. Fundamental to all three is the observation that AI is a largely ‘civil technology’, as explained in Chap. 4. That is, a technology developed by the business community and not, or only to a lesser extent, by government or independent researchers. It is also a technology based on large volumes of data, the abundant availability of which is related to the design of the digital world, with the internet centre stage. Moreover, the actors that already dominate the collection, processing and dissemination of data over the internet are also AI’s largest investors and developers.Footnote 101 These factors have a significant bearing on the process of embedding AI in society, and government will have to address them. How it does this in fulfilment of its regulatory task in the years ahead will determine whether AI is embedded in a manner that fully respects fundamental rights and society’s core values.

2.5 Three Developments That Influence the Embedding of AI

We have identified three key developments that will influence AI’s integration and embedding in society. They are increasing surveillance in the public domain, unequal growth in the use of digital resources and the concentration of power within the digital domain, with spill-overs into other areas of society, not least democracy. A considered government view of these developments is crucial because they have major implications for the ability to regulate AI, be that by government itself or by other actors, in the short and the long term. Moreover, they also shape the relationships associated with the more specific issues raised by the embedding of AI. For example, transparency is important not only so that substantive decisions made by individual AI systems can be understood but also so that the developers and users of such systems can be prevented from exerting undue and unchecked influence over society.

2.6 Surveillance

The first development is the large-scale processing of personal and other data for surveillance purposes, and its use to influence how individuals and companies behave. Although such activities are certainly not new in themselves,Footnote 102 it does not follow that government should be unconcerned about them. The practical scope for surveillance allowed to companies, other organizations and private citizens in the years ahead will be crucial to the direction digital society takes in the longer term. Observation, even covert, already appears to be viewed as increasingly normal by companies, governments and the general public.Footnote 103

Furthermore, when AI is involved, observation data becomes more than the mere input and output of a digital system – it actually helps determine the quality of the system’s risk assessments, predictive models and modelling variables and is therefore formative in how the system works. Consider Gijs van Dijck’s research into the quality of the OxRec algorithm used by the Dutch probation service to advise courts on the risk of a suspect reoffending.Footnote 104 Since this was introduced, he argues, its users seem to have allowed themselves to be guided by predictions that are regularly incorrect, even though the new system performs no better than its predecessor and also entails a risk of discrimination on the basis of race, class or other social characteristics. Moreover, data shapes not only systems but also policies. For example, there are now calls for a transition from a policy cycle to a data cycle.Footnote 105

Surveillance of citizens, consumers and others has become standard practice for almost all companies and government agencies, as well as many private individuals. Commercial firms now base their business models on surveillance, with the consequence that any restriction of their capability in this area implies lost income. For government the collection and processing of data, particularly personal data, opens the way to monitoring the activities of people and companies in many different arenas.Footnote 106 Another significant point is that, as pointed out previously, personal-data processing now often takes place at the group level as well as the individual level – a practice that existing protection mechanisms are not well designed to address. The point here is not that surveillance is inherently undesirable or dangerous; the real cause of concern is the increasing distortion and imbalance it causes to relationships between citizens, businesses and government. As we elaborate later, that is problematic with regard to control over and access to data, and therefore its wider availability.Footnote 107 Another problem is imbalance in the extent to which actors can influence the collection and further processing of data. In recent decades people’s insight into and control over what happens to their data have been decreasing,Footnote 108 prompting repeated references to a ‘black box society’.Footnote 109 AI is liable to make the black box still more impenetrable, partly as a consequence of inadequate supervision and judicial control.Footnote 110 The reason is that, while AI enables people to monitor ‘each other’, the underlying mechanisms and the business models of the companies supplying applications such as facial recognition-enabled doorbells are hidden from them. Furthermore, AI is making data collection less focused: the definition of selection criteria no longer precedes the collection and processing of data, since AI’s great strength is its ability to reveal previously unexpected patterns in large volumes of data.
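The point that AI can surface patterns that were never specified in advance can be illustrated with a minimal clustering example: no selection criteria or labels are defined beforehand, yet groups emerge from the data. The data below is invented and the example is only a sketch of the general mechanism, not of any system discussed in this chapter.

```python
from sklearn.cluster import KMeans

# Invented records: (hours of evening activity logged per week, distinct locations visited)
observations = [
    [2, 1], [3, 2], [2, 2],       # one behavioural pattern...
    [20, 15], [22, 14], [19, 16], # ...and another, never defined in advance
]

# No selection criteria are specified; the algorithm partitions the data on its own.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(observations)
print(model.labels_)  # e.g. [0 0 0 1 1 1]: two groups 'discovered' rather than predefined
```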

Another significant point is that not only is the volume of data collected by unfocused methods increasing, but its nature is changing as well. Whereas in previous decades fairly harmless personal details were typically harvested, often through direct interaction with data subjects (asking them to provide certain information), nowadays smart devices gather material about our activities without us even realizing. “In more and more spheres of society, information about people’s physical and behavioural characteristics, such as their faces, voices and emotions, are digitally collected and processed. Such data is intimate information that may relate to private matters such as health, or that may be used to identify a person remotely.”Footnote 111 So, for example, insurers can individualize their risk assessments using data about behaviour, emotion and actions.

AI and the scope it offers for facial recognition and many other new and enhanced functionalities are transforming surveillance activities. Not surprisingly, some of the applications set to be banned by Article 5 of the EU’s proposed AI Act are surveillance-related, including random mass surveillance for law enforcement and social scoring purposes, as in the Chinese government’s social credit system. Such a ban would rightly shift attention from AI itself to a particular field of application. However, it is questionable whether the regulation of AI at the individual application and context level is sufficient. Technology companies go to great lengths to secure people’s attention because their advertising revenues depend on user interest. They also have incentives to collaborate, since doing so enables them to create ever more precise user profiles. With the help of Google Maps, for instance, Spotify can see what music its users listen to when driving, while Google itself can refine its user profiles by incorporating information about their musical tastes. Similar network effects are evident wherever organizations collect data. It is therefore more urgent than ever to consider the desirability of a high-surveillance society.
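The mechanism described above – combining data from different services to refine a user profile – can be sketched as a simple data join. The services, field names and inference rule below are hypothetical; they only illustrate how two separately innocuous data streams yield a more detailed profile once combined.

```python
# Hypothetical per-service records about the same (pseudonymous) user.
location_history = {"user_42": {"commute": "car", "evening_area": "city centre"}}
listening_history = {"user_42": {"top_genre": "classical", "listens_while_driving": True}}

def enrich(user_id: str) -> dict:
    """Merge the two streams and derive an inference neither service held alone."""
    profile = {**location_history.get(user_id, {}), **listening_history.get(user_id, {})}
    if profile.get("commute") == "car" and profile.get("listens_while_driving"):
        profile["ad_segment"] = "in-car audio advertising"  # invented segment label
    return profile

print(enrich("user_42"))
```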

In the surveillance debate, considerable attention is rightly devoted to privacy implications and to associated issues such as banning the collection of certain types of data (biometrics, for instance), transparency and citizens’ rights.Footnote 112 That debate will need to be broadened and deepened by also considering the revenue models and power of the companies engaged in surveillance.Footnote 113 The emphasis they (supposedly) place on ethical AI should therefore be subject to critical examination. A focus on self-regulation and how it deals with ethical issues risks drawing attention away from underlying structural problems. Once the way something is done becomes the subject of discussion, the question of whether it should be done at all tends to be forgotten. Kate Crawford makes a similar point in her new book, Atlas of AI. She argues that a narrow definition of AI and an abstract debate about good practice serve the interests of big players by ignoring questions of power, capital and governance.Footnote 114 From this she concludes that addressing ethical issues is important, but insufficient. The focus, Crawford asserts, should be less on ethics and more on power.Footnote 115

2.7 Imbalance

Data collection, use, control and quality are increasingly pertinent, therefore, to fundamental issues concerning the organization of society, the way people view that society and the behaviour and position of individuals within it.Footnote 116 Such activities also touch upon international relations, particularly the dependence of the Netherlands and Europe on other regions. Concentration of the growing volume of available data in the hands of a very small number of companies based outside the EU only serves to amplify concerns in this regard. Which brings us to the second key development influencing the embedding of AI, namely the growth of imbalance between the public and private sectors in terms of their interest in, position relative to and influence over the use of digital resources.

At present private actors are primarily responsible for the development, use and circulation of AI, largely because many recent advances have been made by the business community, but in part also because, at least until recently, the world’s governments took a passive approach to regulation of the digital domain. That has led to a growing imbalance between the levels of AI use in the public and private domains, and also increased government’s dependence on private actors for digitalization of the public sector. If government does not start using AI sooner rather than later, it will incur higher opportunity costs, draw private actors further into the fulfilment of public tasks and increase the public sector’s dependence on them. Such developments have the potential to erode democratic accountability and ultimately diminish government’s scope to determine its own policies.

For example, the government or organizations in the health or education sectors may find themselves tied to a single vendor, which thus accrues power to dictate what services are provided and on what terms. An ongoing debate concerning the Dutch government’s switch from Microsoft 365 to Google Workspace illustrates this. Although Microsoft has repeatedly promised better user privacy safeguards, the government remains very dubious about its ability to deliver. A switch to Google Workspace nevertheless appears problematic, because Google’s services also entail significant privacy risks. A similar situation exists in the education sector, where G Suite for Education (a variant of G Suite Enterprise, featuring Gmail, Docs and Classroom) is used. A data protection impact assessment (DPIA) for two Dutch universities has highlighted the fact that, where metadata is concerned, Google regards itself as the sole data controller. This means that it alone and autonomously determines the purposes for which metadata is collected, and the means used. Furthermore, its privacy agreements state that it may unilaterally change terms and conditions regarding metadata without seeking the user’s consent. Consequently, educational institutions that use Google G Suite for Education have little or no control over such data, which may relate to staff and students at any level of the educational system. The Dutch Data Protection Authority therefore advises government and educational institutions not to use Google G Suite, or to stop doing so if they already do.

It is not only privacy or such aspects as exclusive rights, and therefore control over AI applications, data and algorithms, that are put in jeopardy by the imbalance just described. The public sector is also highly dependent on private companies, including some Dutch firms, in other ways, as reflected in the practical influence they exercise over the translation of policy and rules into digitalized implementation processes. A report by the Netherlands Institute for Human Rights has observed that decisions regarding the design of AI systems used by Dutch local authorities are often made not by those bodies themselves but by the system vendors. “That,” the report says, “is leading to standardization and means that the way national rules are interpreted is determined by vendor companies, with implications for the practices of all the local authorities using the software in question.”Footnote 117

The Netherlands is certainly not alone in struggling with dependence on large tech firms. The German federal government has commissioned a market analysis with a view to reducing its reliance on individual software providers. Like many national administrations, the Germans use Microsoft Cloud. But they have decided to gradually reduce their dependency on this system because of Microsoft’s decision to require users to migrate to its cloud-based ‘software as a service’ with effect from 2026. From that date Microsoft will effectively be able to dictate what applications customers are able to use. Furthermore, Germany’s Federal Data Protection Authority instructed all the country’s government departments to close their Facebook pages by the end of 2021. It says that these are not GDPR-compliant and that page administrators cannot therefore fulfil their accountability obligations under Article 5(2) of that regulation.Footnote 118 Facebook reportedly has no plans to make changes with a view to achieving GDPR compliance. Yet dependence levels vary from country to country. A comparative analysis of EU member states’ university ICT services, for instance, has found that the Netherlands is far more reliant on US cloud service providers than most other European nations, including Germany and France.Footnote 119

A similar imbalance is likely to develop in relation to AI, especially where it is provided as a service. Two forms of dependency are liable to arise: dependency on AI itself and on the supporting technologies needed to make use of it (see Box 8.5). As with earlier technologies, users will probably be offered multiple choices to begin with. Over time, however, the big technology companies may well modify their offerings. The availability of AI applications could be restricted to customers who use the companies’ data hosting services, for example, or subscribe to comprehensive service packages.

Box 8.5: AI and Dependency on Foreign Vendors

High levels of dependency on foreign vendors exist in relation to both AI itself and the supporting technology.Footnote 120 The Netherlands is strong in AI research and development, but dependent in terms of access to AI-related services – especially software packages and online library services, which are largely provided by large technology companies. That is the case with both commercial and free or open-source software. For example, Google and others offer access to image-recognition models on a commercial basis. Access to AI management tools and services – including the Machine Learning Engine on Google Cloud, Azure Machine Learning Studio on Microsoft Azure, Einstein on the Salesforce cloud and IBM Watson ML – is also controlled by commercial actors. The more such software is integrated into Dutch AI products and services, the less the Netherlands is able to safeguard its civic values. A further problem is that there is no guarantee that packages like those mentioned will remain open source.

As far as supporting technology is concerned, the Netherlands is strikingly dependent on the services and products of foreign cloud providers – a situation that poses a risk to the entire AI application value chain. The Dutch market is Europe’s fourth largest for cloud infrastructure. However, its supply side is dominated by overseas providers. Amazon, Microsoft and Google lead the way, with local firm KPN fourth.

There are two reasons for the situation described above. First, as previously explained, lock-in phenomena have become commonplace because service providers benefit from network effects. The more users they have, the more data they acquire and the more they can optimize their products. It is therefore in their commercial interests to retain users. Offering and later requiring subscriptions to integrated service packages is one way of doing this; another is to make applications incompatible with those from other providers.

Second, because of their access to large volumes of data the big technology companies are well placed to develop AI and to invest heavily in it. Over the past decade hundreds of smaller AI companies have been acquired by big tech firms. Apple leads the way here with twenty acquisitions, followed by Google (fourteen), Microsoft (ten), Facebook (eight) and Amazon (seven).Footnote 121 This brings us to our third key factor: the concentration of power.

2.8 Concentration of Power

A small number of large US technology companies wield disproportionate influence and are dominating the development of AI. They include the vendors mentioned above, whose services government already uses. As AI becomes more deeply embedded in society, these firms are gaining ever-greater influence over many of its aspects, including political processes and democracy.Footnote 122 One person who has warned of the implications is Paul Nemitz, an adviser to the European Commission and a member of the German federal government’s data ethics committee. He takes the view that because AI’s impact on society is so significant, its use should be subject to democratically legitimized decision-making. In other words, Nemitz’s call for further regulation is justified not by the nature of the technology itself but by the extent of its use and, as a consequence, the excessive power to shape society that is being concentrated in the hands of a small number of companies. This becomes particularly problematic when AI-based services are integrated within society’s infrastructure. It is therefore pertinent to consider whether such services should in fact be considered public goods.Footnote 123

Concentration of power has previously proven problematic in the oil and electricity industries, where initially innovative companies gradually developed into monopolies, leading governments to break them up and either convert them into public utilities or have them continue as smaller commercial entities. Immediately before World War I, GE and Westinghouse in the USA and Siemens and AEG in Germany became the largest companies in the world following mergers designed to increase their scale and access to capital. They even made a pact to divide up the global export market for the electrical technologies and machinery they produced.Footnote 124

In recent years investigative committees in the US, the EU and the UK have turned their attention to the big technology companies, accusing them of abusing their power. The resulting reports warn that there is no longer competition in the market, only competition for the market, which leads to less innovation, less consumer choice, compromised privacy rights, a weaker press and a weaker democracy. The different committees are remarkably consistent in reaching this conclusion.Footnote 125

The Judiciary Committee of the US House of Representatives published a report on Apple, Facebook, Google and Amazon in early October 2020. Its bottom line was that these firms act as gatekeepers for certain distribution channels and abuse that position to deny others access and so maintain their own power. The committee says that a ‘kill zone’ exists around them, which competitors must stay clear of in order to survive. In addition, they abuse their brokerage role to increase their own dominance. Amazon, for instance, utilizes data on businesses that use its cloud services to offer its own competing products.

This report also considers AI. One of its conclusions is that voice-controlled assistants have a clear network effect. All algorithm-based applications learn and improve with use. The more they are used, the better they become. Consequently, user numbers are decisive when it comes to the success of an AI application. Furthermore, access to a combination of big data and AI is enabling the tech giants to enter new markets where the possession and use of data confers an advantage. They are already exploiting this position in the market for ‘smart’ devices, for example: Apple and Amazon (Alexa) sell cut-price virtual assistants that only access or recommend the vendor’s own services.Footnote 126 If this technology eventually becomes the norm for online shopping, the big tech companies will already have a firm grip on the market.

Various proposals to manage such developments are now being debated. But this discussion only highlights the weakness of the instruments currently available to government. Privacy legislation, for example, is the standard vehicle for protecting personal data. Yet serious doubt now exists regarding the sufficiency and effectiveness of that regulatory framework.Footnote 127 There is also criticism of existing competition law, which is seen as overly focused on low prices as the primary indicator of consumer welfare.Footnote 128 This bias is often inappropriate in relation to technology companies, which typically provide some or all of their services without charge and often serve multiple markets simultaneously. Many commentators have therefore argued for a review of competition law and its underlying objectives.Footnote 129 One measure suggested by the US report is the break-up of the big tech companies. Advocates believe that such a move, or the threat of it, would energize the market. Similar strategies have previously been adopted in relation to IBM (whose hardware and software divisions were separated), AT&T (then the world’s largest company, split into eight smaller entities) and Microsoft.Footnote 130

The European Commission is also active in this area,Footnote 131 taking legal action against technology companies that fail to comply with European laws and regulations and imposing increasingly severe fines over a period of years. Two new European legislative instruments have been on the table since the end of 2020: the Digital Services Act (DSA) and the Digital Markets Act (DMA). The first, in full the Proposal for a Regulation on a Single Market for Digital Services, would require large platforms to remove illegal and harmful content without delay and allow users to see how recommendation algorithms work. The DSA is hence intended to improve the liability and security rules applicable to digital platforms, services and products. The largest platforms will be subject to close, systemic scrutiny.

Through the DMA the Commission is also seeking to add new requirements to existing competition law. This measure classifies major technology platforms as ‘digital gatekeepers’,Footnote 132 a status that reflects both their size and their critical role within society. The long-awaited legal assignment of gatekeeper status could prevent big tech companies from continuing to evade legislation. If the Commission’s proposal passes into law, it will put an end to the argument that these firms fall into a special category of business not subject to various legal provisions. In spring 2022 a political agreement was reached on both proposals. Whatever final form the new legislation takes, it will fundamentally change digital competition law.Footnote 133

In the Netherlands too, competition law has been a subject of debate for several years. In 2019, for example, Kees Verhoeven (then a member of the Dutch parliament) presented a policy proposal for the modernization of competition rules, amendment of the European and national rules to accommodate data and the definition of new criteria to demarcate the digital market and to determine companies’ share of it.Footnote 134 Various ministers have submitted documents to the House of Representatives concerning the developments outlined above.Footnote 135 In 2019, for example, the undersecretary for Economic Affairs and Climate Policy presented a green paper on ‘The future resilience of competition policy in relation to online platforms’ (Toekomstbestendigheid mededingingsbeleid in relatie tot online platforms). This addressed a number of particular developments relating to the use of algorithms and cartels. “Because consumers’ preferences and financial status can increasingly be accurately gauged using data and algorithms,” it stated, “individualized price discrimination may develop.”Footnote 136 Self-teaching algorithms might thus one day even be able to form cartels without human intervention. Building on this perspective, a bill providing for the modernization and better enforcement of consumer protection rules was subsequently tabled.

The Netherlands, Germany and France have since collectively proposed that all mergers and takeovers by large digital platforms performing a gatekeeper role should be subject to review by an EU regulator.Footnote 137 The mechanism they suggest would supplement and reinforce the supervisory and other provisions of the DMA. Those provisions include an obligation to share data, interoperability requirements and a ban on digital gatekeepers favouring their own products or services in rankings.

Various other proposals to downsize or reduce the influence of big technology companies are under consideration, such as obliging them to make their services and data available to others.Footnote 138 Matters being discussed in this context include interoperability and platform neutrality. The proposal in the latter regard is to follow the existing principle of net neutrality, whereby internet providers are not allowed to price content differently for different users. One fairly recent suggestion is an approach that combines regulation with technological solutions; this would involve the use of ‘middleware’, an intermediate technological layer inserted above a platform to ensure fair competition.Footnote 139

The companies providing the middleware would have the task of editing news and information, for which they would have their own algorithms and would be able to develop their own profiles. Users would then be able to choose between different information channels, whilst the rapid and extensive dissemination of misinformation and fake news picked up by a single platform’s algorithms would be counteracted. The hope is that this approach can address the problems currently caused by platform companies when it comes to spreading fake news and filtering illegal content. It could be difficult to implement such a far-reaching change to the digital infrastructure, though: that would require a new revenue model, co-operation on the part of platform operators and a technical framework that is both compatible with the architectures of the various platforms and enables market diversity.
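To give a sense of what such a middleware layer might look like technically, the sketch below treats a middleware provider as a re-ranking function that users choose and that sits between a platform’s feed and what they actually see. Everything here – the feed, the providers and the ranking rules – is a hypothetical simplification, not a description of any proposed implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Item:
    source: str
    text: str
    engagement: float  # the platform's own ranking signal
    verified: bool     # whether the item comes from a source the provider trusts

# A middleware provider is modelled as a function that re-ranks or filters the platform feed.
Middleware = Callable[[List[Item]], List[Item]]

def platform_feed() -> List[Item]:
    """Stand-in for a platform API returning engagement-ranked content."""
    return [
        Item("platform", "sensational unverified claim", engagement=0.9, verified=False),
        Item("platform", "local news report", engagement=0.4, verified=True),
    ]

def engagement_first(items: List[Item]) -> List[Item]:
    """One provider: keep the platform's engagement-driven ordering."""
    return sorted(items, key=lambda i: -i.engagement)

def verified_first(items: List[Item]) -> List[Item]:
    """Another provider: its own algorithm prioritizes verified sources."""
    return sorted(items, key=lambda i: (not i.verified, -i.engagement))

def render(feed: List[Item], chosen: Middleware) -> List[str]:
    """Users see the feed through the middleware provider they have chosen."""
    return [item.text for item in chosen(feed)]

print(render(platform_feed(), engagement_first))
print(render(platform_feed(), verified_first))
```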

Exactly how the proposals made by the European Commission and others will eventually be implemented remains uncertain. What is clear, however, is that they will have a major impact on AI’s integration into society. So, as well as having work to do when it comes to the questions the technology raises in respect of current regulatory frameworks, government must also invest in structuring the way we deal with AI itself in order that civic values are properly safeguarded in the long term. Embedding AI within society is thus in essence an issue related to the wider ‘digital living environment’ and as such a fundamental issue that government must address as a matter of urgency. If it fails to do so, government may find its scope for action restricted by others taking the lead in determining how and for what purposes AI is used, or by the public losing trust in or even rejecting AI.

Key Points – AI Regulation and the Digital Living Environment

  • Government’s role in the regulation of AI will inevitably increase as the technology enters more widespread use and situations arise that require intervention.

  • The timing of government intervention is crucial. If it waits too long, AI may become embedded in ways that are inconsistent with or fail to serve civic values.

  • A system technology like AI requires a legislative agenda that addresses not only issues associated with the technology and its use, but also its broader societal effects.

  • Mass surveillance, extreme dependency on private actors and power concentration represent threats to civic values in the context of AI’s societal integration, and therefore require urgent government intervention.

3 In Conclusion

In this chapter we have considered the overarching task of government regulation. We perceive that task as having two dimensions associated with the systemic nature of AI. The first is its pervasiveness, which is such that it will require new or adapted regulatory frameworks in many areas.

Generally speaking, we are at a very early point in the process of AI’s societal integration or embedding. It might easily be supposed, therefore, that government should refrain from intervening at this stage and instead monitor developments until ‘the time is right’. Such a policy is undesirable, however, because of the major impact AI is likely to have. If government wishes to retain its capacity for significant and effective intervention now and in the longer term, particularly with a view to safeguarding civic values, it must be vigilant and start preparing now for the more forceful role it will inevitably have to play in due course. Fortunately, this process of preparation is already under way in certain spheres, both in the Netherlands and in the European Union. Against that background it is important that government be aware of the various issues surrounding the regulation of AI. It must also commit to structural investment in the collection and collation of signals regarding the opportunities and risks associated with the societal embedding of AI; otherwise its ability to make appropriate changes or define new rules – or to do so in good time – will be seriously curtailed.

The second conclusion of this chapter is that as AI becomes more deeply embedded in society, government’s regulatory task will inevitably increase. At the same time, the nature of the issues it faces will change as increasing use of AI gives rise to second and third-order problems. It will then be necessary to address problems posed not only by the technology per se, but also by the extent of its use and the scale of its effects. Which in turn will require active management of the wider digital living environment into which AI is ultimately going to be embedded. Precedents for this kind of approach have previously been set in other fields where developments have fundamentally affected and reorganized society. In the period ahead, government must therefore focus on converging currently separate policy portfolios and lines of development so that they are viewed as elements of a more comprehensive design challenge.

It should also be recognized that AI is entering a society where data is already collected on a large scale, where digital products, services and infrastructures are made available largely by private actors and where the leading AI developers occupy a dominant position in the global internet economy. That context is bound to have a major bearing on the way AI is eventually used in society, by whom and for what purposes. If government wishes to retain its ability to influence developments, it must act now. Delay is not only undesirable, it is unnecessary. Numerous research reports, other documents and plans are already available, which government can use to support and guide its interventions. The important thing now is to be energetic in converting those plans into regulatory action.