
1 Introduction

One of the key figures in modern-day engineering, Dean Kamen, believes that “[e]very once in a while, a new technology, an old problem, and a big idea turn into an innovation” (Sorvino 2016). Nowadays people keep testing the limits of technology and creativity, striving to invent the next big thing and change human lives. This highly competitive race to the top is certainly fascinating, but changing lives often brings unexpected consequences and can create unforeseen risks. One of the primary roles of regulation is risk mitigation. In the words of Prof. Karen Yeung, regulation is an “organized attempt to manage risks or behaviour in order to address a collective problem or concern” (Yeung 2017). The problem with regulating disruptive technologies such as AI, however, originates from a combination of the largely unpredictable and dynamic nature of these technologies and the traditional approach to legislation, which is reactive and too slow to adopt and amend. Another big issue is technological opacity, which highlights the need to involve a variety of people with specific expertise in drafting legislation that can be comprehensive and serve the basic need of any law, namely, to ensure legal certainty (Kaal 2016).

This is why the long-anticipated White Paper on AI, adopted by the Commission at the beginning of 2020, was met with criticismFootnote 1 for not reflecting the need for a novel approach to regulating new technologies, especially when individual member states have already been implementing such an approach, predominantly in the form of regulatory sandboxes.Footnote 2 The omission was all the more surprising given that regulatory sandboxes, in particular, have been pinpointed on a number of occasionsFootnote 3 as a prominent tool to facilitate innovation and promote trust in new technologies, especially AI. The omission in the White Paper, however, was subsequently addressed through the adoption of the draft Regulation laying down harmonised rules on artificial intelligence and amending certain Union legislative acts (the AI Act). Its Title V provides a first comprehensive glance at what regulatory sandboxes for AI are deemed to be and how they are supposed to be implemented.

This chapter aims to outline the key issues policymakers are facing in their attempts to regulate AI and how those issues are addressed through the introduction of regulatory sandboxes as a tool of a novel emerging type of regulation. To achieve this, we first explain the nature of the approach and examine whether and how it could be applied to AI technologies in a variety of sectors, from financial law to health services, and whether its multidimensional nature is adequately reflected in the draft AI Act. Finally, we identify some challenges this regulatory tool faces and conclude whether it lives up to the expectation of being indeed a breakthrough in regulation.

2 The “Taming” of AI by the Law

As already mentioned, the primary aim of creating laws is to mitigate certain risks arising from objects or relations in society. To illustrate this point, we can look at an object which we are very much familiar with but that was once new and unfamiliar: the car. Its specifics in terms of mechanics and control brought up a number of concerns associated predominantly with people’s lives and health. This led to the adoption of legislation setting up rules that every driver needs to comply with for their own safety and the safety of other drivers and pedestrians. Later on, the legislator obtained more information which showed the necessity of rules governing mandatory driving licences and insurance. As the development of the automobile industry and design progressed further, it became apparent that manufacturers also needed to be regulated in order to ensure that new cars are produced following certain safety standards. With automobiles becoming the most common means of transportation (European Commission and Eurostat 2000), their impact on the environment and urban spaces became obvious, exposing additional risks and leading to further legislation in an attempt to mitigate them.

This simplified example demonstrates the relationship between technology, risk and law. Why, then, would the traditional way of legislating not work on another type of technology such as AI? Firstly, an AI technology is arguably much more complicated and has the potential to affect society in more domains than a car. It is often categorized as a disruptive technologyFootnote 4 and as such it poses risks that are hard to predict. Secondly, AI technologies are much more opaque compared to automobiles. Indeed, an ordinary user may not know exactly the purpose of the many elements composing a car, but someone with sufficient knowledge of mechanics does, and hence the car remains predictable, whereas AI may sometimes act in unexpected ways. Another key difference is the so-called pacing problem of regulation related to AI. The pacing problem is the significant contrast between the pace of AI innovation and that of the regulatory tools used to govern it (Marchant et al. 2013). Last but not least, in order for legislation to be adequate and to serve its function of risk mitigation, its object needs to be clearly defined. This is important since a well-written and serviceable legal act ultimately needs to cover as broad a range of real-life situations as possible. This is ensured by precise usage of legal terminology and detailed definitions of every term used in the act itself, via references to other acts, through applying the rules of legal interpretation, or through judicial decisions. It also contributes to achieving legal certainty.Footnote 5

Turning back to our example, if a regulator wants to adopt a legal act dealing with certain aspects of motor vehicles, a first step would be to define what a motor vehicle is. Looking at Directive 2007/46/EC, for instance, we find the definition straight away in Article 3.Footnote 6 Reading the definition, a reasonable person would easily conclude that her vehicle with three wheels is clearly not a motor vehicle within the scope of the Directive and is therefore not subject to its rules, a conclusion made possible by the existence of a comprehensive legal definition of a motor vehicle. Returning to the problem at hand, it follows that prior to creating any sort of legislation related to AI and subsequently regulating it, there must be a legal definition that serves the purposes of the AI Act without contributing to overregulation of the subject.

This definition has been a hot topic for a while, not only in the legal field but also in computer science.Footnote 7 The taxonomy issue was highlighted in the discussion that followed after the High-Level Expert Group on Artificial Intelligence (HLEG) adopted, together with an additional report on the topic, a definition of AI, aiming to “avoid misunderstandings, to achieve a shared common knowledge of AI that can be fruitfully used also by non-AI experts, and to provide useful details that can be used in the discussion on both the AI ethics guidelines and the AI policies recommendations” (High-Level Expert Group on Artificial Intelligence 2019). A number of issues were raised regarding that particular definition, ranging from the exclusion of self-replicating machines from its scope, through the adoption of a “created by humans” criterion, to a “one size fits all” approach that treats weak and strong AI alike (Center for Data Innovation 2019).

The aforementioned problems were partially solved through the definition adopted in Article 3(1) of the draft AI Act, which covers AI systems and describes them as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” This new definition, however, also reveals some weak points: for example, it does not clearly demonstrate the difference between AI and AI systems, it does not reflect current standardization efforts in the EU, and it is so broad that it practically encompasses “even the simplest search, sorting and routing algorithms” (BDVA/DAIRO 2021).
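To make the breadth criticism concrete, consider the following deliberately trivial sketch (our own illustration, not an example taken from the cited critique). A few lines of exhaustive search arguably constitute a “search and optimization method” in the sense of Annex I and, “for a given set of human-defined objectives”, generate a “recommendation”, and would thus seem to fall within the literal scope of the definition:

```python
# A deliberately trivial, hypothetical illustration: exhaustive search over
# options (arguably a "search and optimization method" under Annex I) that,
# for a human-defined objective (minimise price), generates an output in the
# form of a recommendation, as described in Article 3(1) of the draft AI Act.

def recommend_cheapest(options: dict[str, float]) -> str:
    """Return the name of the cheapest option by exhaustive search."""
    return min(options, key=options.get)


if __name__ == "__main__":
    flights = {"flight A": 120.0, "flight B": 95.5, "flight C": 99.0}
    print(recommend_cheapest(flights))  # -> flight B
```

No one would ordinarily call such code “artificial intelligence”, which is precisely the point made by BDVA/DAIRO about the definition’s breadth.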

What is of vital importance for this definition is Annex I, which contains particular types of AI techniques and approaches, such as reinforcement learning, symbolic reasoning, etc., and which can be updated through delegated acts in accordance with Article 4 in conjunction with Article 73 of the draft AI Act. This would allow a faster reaction to scientific developments that have not been covered by the regulation and is intended to address the pacing problem, although it still might not be agile enough considering the time a delegated act takes to enter into force. In its briefing to the European Parliament, the European Parliamentary Research Service acknowledges the need to better address the pacing problem and suggests “flexible instruments such as delegated acts, sunset clauses and experimental legislation” (Kritikos 2019).

These new regulatory tools are not a novelty; they emerged long before regulating AI became a task at hand (Ranchordás 2014). They have many names and are often used in various combinations, but they do have several things in common: they are more dynamic compared to traditional legislation, allow the participation of a broader circle of stakeholders, and provide the regulator with valuable feedback, allowing a better understanding of the object that needs regulation and of the risks and benefits it involves. Undoubtedly, one of the tools that has generated the most hype is the regulatory sandbox, which is examined in the following sections.

3 Playing in the Sand

The term ‘regulatory sandbox’ sometimes creates confusion, as it is rather similar to the notion of a sandbox environment in computer science. Despite the similarities, however, the two terms are not equivalent. A sandbox is a testing tool, while a regulatory sandbox is a regulatory tool and a process, which addresses different risks compared to its namesake (Yordanova 2019).

The financial sphere was the first area in which regulatory sandboxes were tested. The financial crisis of 2008 resulted in a major global crisis of regulation (Armstrong et al. 2019). The financial sphere has always been strongly affected by, and has evolved with, technology, an intersection often referred to as FinTech.Footnote 8 Several periods in the evolution of FinTech have been identified, starting with the use of the telegraph and reaching what is nowadays considered FinTech 3.0 (Arner et al. 2016). It is characterized by the use of rapidly developing technologies, often leading to the inclusion of new actors in addition to traditional financial product/service providers, or to automation of processes that may have unexpected and undesired consequences, for example, algorithmic bias leading to discrimination. The variety of ways disruptive technologies could be utilised for the purposes of FinTech creates the necessity of some form of regulation. On the other hand, overregulating innovation just to ensure that every possible risk scenario is covered may hinder it, since developing technologies in accordance with the corresponding legal requirements would be time-consuming, costly and entail increased liability. Therefore, innovators may ‘shop’ for a jurisdiction that is less prompt to regulate the financial sector.

These concerns demonstrated the need for a new approach to regulation that would position the regulator as a partner and a guide rather than an enemy for companies willing to innovate. In 2014, the UK Financial Conduct Authority (FCA) started Project Innovate and officially created the first regulatory sandbox. In October 2017, the FCA published its “Regulatory sandbox lessons learned report” (UK Financial Conduct Authority 2017), which positively assessed the results of the regulatory sandbox’s application. This has led to the establishment of a growing number of sandboxes in different financial jurisdictions and to attempts to transfer this regulatory tool to other sectors such as data protection (UK Information Commissioner’s Office 2019) or aviation (Civil Aviation Authority 2019).

The potential of regulatory sandboxes for regulating disruptive technologies, and especially AI, has already been recognised. A number of states, such as Finland (Ministry of Economic Affairs and Employment of Finland and Steering Group of the Artificial Intelligence Programme 2017), include the use of sandboxes as a means to build a comprehensive legal framework for AI. The trend is supported by the EU, which sees regulatory sandboxes as innovation facilitators (ESMA 2019) and recognizes them as an important tool in future regulatory activities regarding AI (European Commission 2018).

This trend was further reinforced by including regulatory sandboxes in the European Commission’s Better Regulation ToolboxFootnote 9 and operationalising them in further initiatives such as the future pan-European blockchain sandbox (Council of the EU 2020) and the draft AI Act, as a way to both promote innovation and support SMEs (European Commission 2020). At the same time, other jurisdictions outside the EU are already implementing regulatory sandboxes for testing AI-based products, services and business models, either through specific AI-dedicated sandboxesFootnote 10 or under the framework of another type, for instance, in the area of finance or healthcare.Footnote 11

In this context, in order to better understand the nature and the process behind regulatory sandboxes, it is only logical to look at those applied in the field of FinTech, due to their number, their geographical distribution and the fact that it was the sphere where regulatory sandboxes first appeared.

Granted, there is no universal definition of the term; the European Securities and Markets Authority regards regulatory sandboxes as “schemes to enable firms to test, pursuant to a specific testing plan agreed and monitored by a dedicated function of the competent authority, innovative financial products, financial services or business models” (ESMA 2019). The definition highlights several points. Regulatory sandboxes are deemed essentially a testing ground for innovative products, services or business models, where their potential risk is mitigated but also where the relevant supervisor may provide certain leeway from the general rules for the purpose of the testing.

On the other hand, the Council of the EU came up with a slightly different definition, presenting regulatory sandboxes as

concrete frameworks which, by providing a structured context for experimentation, enable where appropriate in a real-world environment the testing of innovative technologies, products, services or approaches – at the moment especially in the context of digitalisation – for a limited time and in a limited part of a sector or area under regulatory supervision ensuring that appropriate safeguards are in place (Council of the EU 2020).

There are already some differences between these first two definitions, the second being broader and encompassing various sectors, but also putting emphasis on the need for appropriate safeguards during the testing period. The draft AI Act then provides a further definition for specific AI regulatory sandboxes in its Article 53(1). It is envisioned that AI regulatory sandboxes

established by one or more Member States competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox (European Commission 2021).

This specific definition contains some additional and novel elements. First of all, it explicitly emphasizes the possibility of multi-jurisdictional regulatory sandboxes. The feasibility of this type of sandbox had been questioned before specific AI sandboxes were even being discussed: it was argued that “the fact that the service lacks the standardization associated with regulation makes the sandboxed activity unfit for cross-border provision of services” (Zetzsche et al. 2017). It remains to be seen how this barrier could be overcome. Furthermore, the scope of the regulatory sandboxes for AI is significantly broadened, encompassing development, testing and validation, and therefore combining the traditional function of a regulatory sandbox with those of other tools such as testing and pilots. It is important to note that there is an ongoing debate on the exact relation between the terms used to describe these ‘safe spaces’ for testing innovation, with or without the involvement of certain authorities. What is agreed on is that “there is an inherent connection between a regulatory sandbox on the one side, and testing and piloting on the other” (Zetzsche et al. 2017) and also that usually jurisdictions “with a sandbox approach put certain piloting and testing activities inside the sandbox since this is more convenient” (Zetzsche et al. 2017). This probably contributes to the spawning of numerous other terms, for example ‘living labs’, ‘regulatory testbeds’, etc., which are used as synonyms and ultimately address “areas in which to trial innovation and regulation” (Federal Ministry for Economic Affairs and Energy 2019). Nevertheless, the definition in the draft AI Act seems to incorporate certain testing and piloting elementsFootnote 12 in addition to the regular sandbox activities, which could be beneficial only if it really ‘facilitates’ the development of innovation and ultimately reduces the ‘time to market’, which has been the primary goal of the tool to begin with (Ringe and Ruof 2018).

The question about the manner and degree of the facilitation element of the regulatory sandboxes for AI, as envisioned by the European Commission, however, remains open. Going back to the original source and examining the already existing examples of sandboxes for FinTech, we can deduce several key elements for a successful sandbox creation and operationalization. First, the sandbox operates for a limited amount of time and under certain test parameters, allowing a pre-determined number of participants. In the interest of transparency and fairness, the sandbox’s entry requirements need to be clearly defined and publicly available. They may vary from jurisdiction to jurisdiction, but the most common ones are genuine innovation,Footnote 13 consumer benefit and the need for testing within the sandbox (UK Financial Conduct Authority 2017). In general, the sandbox is not limited to just SMEs, although some jurisdictions decide to deny entry to regulated entities, supporting only unlicensed companies, which are mostly SMEs (Ringe and Ruof 2018).

Second, after a company has been accepted into the sandbox, a case officer is usually appointed to its case in order to provide regulatory expertise and assess whether sandbox tools to facilitate the testing are needed in the particular case (UK Financial Conduct Authority 2017; Mangano 2018). The sandbox tools are numerous and offer a wide range of possibilities, from the ‘never say no’ approach applied by the Monetary Authority of Singapore (Agarwal 2018) to the comfort from enforcement and letters of negative assurance on exit offered by the FCA. Naturally, when a sandbox is created in a domain that is heavily regulated by EU law, the matter of leeway becomes more complicated, due to the fact that the national regulator cannot provide any exemptions from the rules established by the European Union (Ringe and Ruof 2018). It nevertheless needs to set parameters for the testing phase, for example, restrictions on disclosure, a limited number of clients using the product, service or business model, etc.

A third vital element is guaranteeing sufficient consumer protection during the test, one of the most important tasks for the regulator, especially for regulatory sandboxes testing technologies that may put individuals’ rights at significant risk, such as innovations in the area of healthcare. The means to achieve this depend on the particular case, but probably the most common are clear and detailed communication about the nature of the test, allowing consumers to make an informed decision on every topic related to the tested product or service, combined with testing parameters that mitigate risks, such as limiting the test to non-retail clients. The companies should also ensure compensation or other redress measures in case of harm suffered in the context of the test (ESMA 2019).

After the preparation phase has finished, providing answers and solutions to all of the questions discussed above, the testing phase begins. It involves constant communication between the company and the regulator. One could say that this phase most completely illustrates the symbiotic nature of the regulatory sandbox. The regulator fulfils its role of monitoring the test and ensuring compliance with the necessary standards, but it also observes the tested technology and gains a better understanding of how it works, its potential risks, and which approach is best suited to mitigate them.

Finally, the last phase and element is the evaluation phase, which requires submission of a final report to the authority, following the parameters predetermined during the preparation phase, and an assessment of the success of the test.
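The four elements just described (entry against published criteria, preparation under a case officer, monitored testing, and evaluation) can be summarised as a simple linear process. The following sketch is purely our own schematic model of that lifecycle, distilled from the FCA-style process described above; the class, phase names and example firm are illustrative assumptions, not any regulator’s actual workflow:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Phase(Enum):
    """Illustrative phases of a sandbox case, distilled from the FCA model."""
    APPLICATION = auto()  # entry criteria: genuine innovation, consumer benefit, need for testing
    PREPARATION = auto()  # case officer appointed; test parameters and safeguards agreed
    TESTING = auto()      # limited cohort; constant communication with the regulator
    EVALUATION = auto()   # final report submitted and assessed against the agreed plan


@dataclass
class SandboxCase:
    firm: str
    phase: Phase = Phase.APPLICATION
    log: list[str] = field(default_factory=list)

    def advance(self, note: str) -> None:
        """Record the outcome of the current phase and move to the next one.

        A real process would gate each transition on regulator approval."""
        self.log.append(f"{self.phase.name}: {note}")
        phases = list(Phase)
        if self.phase is not Phase.EVALUATION:
            self.phase = phases[phases.index(self.phase) + 1]


case = SandboxCase(firm="ExampleTech Ltd")  # hypothetical participant
case.advance("entry criteria assessed as met")
case.advance("testing plan and consumer safeguards agreed")
case.advance("test completed within the agreed parameters")
print(case.phase.name)  # -> EVALUATION
```

The linearity of the sketch mirrors the point made above: the value of the process lies less in the sequence itself than in the feedback the regulator accumulates along the way.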

These common elements are justified by the available data, showing high performance results and relatively few unsuccessful tests (UK Financial Conduct Authority 2017). The numerous advantages of the regulatory sandbox model, however, are what make it an increasingly popular choice for regulating disruptive technologies, especially AI. Firstly, a regulatory sandbox demonstrates the regulator’s willingness to facilitate and stimulate innovation, which is a sign of good business opportunities in the respective state. Secondly, an increase in the level of knowledge for both the participants and the regulator is clearly noticeable. This enables regulators to better perform their functions and gain vital insight into emerging technologies, making them less reliant on outside expertise (Scherer 2016). Furthermore, the time to market is reduced, combined with the assurance that the new products/services have all the appropriate safety standards built in. It also allows innovators to receive early warning about possible problematic features of their product/service, as well as the assurance that they would not break any existing regulatory requirements during the test phase.

Would that be enough to conclude that the regulatory sandbox is the most appropriate tool for regulating AI? The answer is not so simple. Despite the enthusiasm demonstrated by states and international organizations in creating and applying regulatory sandboxes, there are some challenges that need to be addressed and assessed. Some of the challenges are common to all kinds of regulatory sandboxes: the lack of a complete regulatory framework for a certain product/service might seem too risky for consumers to engage in the testing; it also means that there would be a lack of standardization. Standardization is important due to its implications for cross-border implementation of the products/services. Furthermore, in some cases the risk of an innovation might not be significant enough to require regulation of any kind; in such cases regulation would simply hinder innovation (Zetzsche et al. 2017). In other cases, the innovation might not be mature enough to be tested in a sandbox, so a wait-and-see approach could be more suitable (Jenik and Lauer 2017). It is also true that the limited number of participants in the sandbox may not provide the representative sample needed to fully determine the effect of a certain technology. The companies themselves might not be too willing to participate, either because they want to grow faster and the sandbox would limit this ability (Zetzsche et al. 2017) or because they are not sufficiently stimulated by the leeway offered by the regulator, especially in the European context, where the sandbox tools are much more conservative compared to other jurisdictions. Last but not least, companies might be reluctant to participate in an environment where trade secrets could potentially be discovered by the competition.

Another category of challenges relates specifically to regulatory sandboxes for AI. The Coordinated Plan on AI stipulates that the envisioned testing facilities for AI “may include regulatory sandboxes…in selected areas where the law provides regulatory authorities with sufficient leeway.” This is rather confusing, due to the fact that until now regulatory sandboxes have been created not to test a specific technology but innovations in a particular field, for example, the financial sphere. This approach does not limit the technologies that are used, but rather their purpose and application. To illustrate our point, an AI-based solution was tested in the FCA’s regulatory sandbox as part of its 5th cohort.Footnote 14 Its purpose was to help SMEs applying for loans by using AI to increase the effectiveness of credit scoring and improve risk assessment while simultaneously reducing costs.

Secondly, the effects of an AI technology might not fall under the scope of only one regulator. For example, an AI technology might be intended for use only in the banking sector and thus be regulated by the financial authority, but at the same time it may turn out to have significant implications for personal data, so that assistance from the data protection authority must be provided. This is problematic from both an organisational and an administrative point of view: regulators usually do not have experience in coordinating with each other on such matters, which could lead to chaos and inefficiency (Ausloos et al. 2018). It is also worth noting that an AI technology is designed to learn and, hence, to change. This means that an AI technology exiting the sandbox labelled as compliant might not remain compliant for long. Such a turn of events might undermine the whole process and ultimately legal certainty (Yordanova 2019).

4 The Way Forward Through the AI Act

The inclusion of the regulatory sandboxes for AI in the draft AI Act signifies the EU’s new approach to regulating disruptive technologies, but we need to take into consideration all the challenges outlined in the previous section. It is evident that despite the many novel opportunities and advantages offered by regulatory sandboxes compared to the traditional (reactive) way of regulation and governance, they are not a panacea but a tool and a building block of a new approach to regulating a data-driven society.

Nesta has carried out detailed research on the features this new regulation model needs to possess and on its key characteristics (Armstrong et al. 2019). Building on the work of Geoff Mulgan (Mulgan 2017) and his analysis of the elements of an emerging regulatory toolkit, Nesta outlines six principles that the new anticipatory regulation should possess, in contrast to the traditional reactive approach.

Anticipatory regulation needs to be inclusive and collaborative, engaging the public and a variety of stakeholders, which also ensures better democratic legitimacy of this kind of regulation. It should also be future-facingFootnote 15 and proactive.Footnote 16 The next principle is that it be iterative, described as “taking a test-and-evolve rather than solve-and-leave approach to novel problems” (Armstrong et al. 2019). The last two principles are its outcome-based and experimental nature. They both show the much more pragmatic and solution-oriented character of the new approach to legislation.

Following these principles, regulators could find the best regulatory tool, or combination of tools, for their particular needs. In addition, a regulator should not hesitate to combine the opportunities provided by anticipatory regulation with tools and approaches from other modes of regulation, such as advisory or adaptive regulation (Armstrong and Rae 2017).Footnote 17 It is important to stress that this classification is just one example of a system that is suitable to deal with emerging technologies and AI in particular. We can also classify regulation depending on whether it is based on principles, risks, the market or reliance on internal management (Black 2010).

Regardless of the chosen classification, the ultimate challenge for regulators remains the mitigation of risk in high-risk AI technologies (Guihot et al. 2017). Regulatory sandboxes certainly offer such a capability, but the result does depend on the nature of the risk and its level. A rather more conservative tool is the implementation of sunset clauses in regulation (Vermeulen et al. 2017), although it would not offer the same amount of feedback as a sandbox. Another tool that is certainly being looked at is standardization: one proposed solution is for AI systems certified as safe to enjoy limited tort liability compared to uncertified ones (Scherer 2016).

These are just some examples from the palette of tools a regulator has at its disposal when regulating AI. The choice of one or another, or even a combination of several, should be as customised as possible and supported by constant efforts to improve regulators’ capacities by providing them with best practices and skill building (Armstrong et al. 2019), especially in the light of the interdisciplinary nature of AI research (Moses 2011).

The draft AI Act sets the scene for regulatory sandboxes to be the centre of attention as the ultimate innovation facilitator. This approach raises at least three different groups of concerns. Firstly, the issues we discussed in the previous sections regarding regulatory sandboxes in general, and those specifically dedicated to AI, have not been solved in the current version of the provisions of Article 53. Secondly, the design of the regulatory sandboxes for AI, as described by the text of the regulation, does not seem to provide many incentives for joining such sandboxes. One of the elements that usually attracts the most innovative products/services, namely the waiver of certain rules, is not touched upon. It is vital to be “clear whether Member State authorities will be able to offer regulatory waivers or other types of regulatory arrangements for AI experiments” (Ranchordas 2021). Furthermore, national regulators might not be able to offer any significant waiver, due to the fact that they do not have such competences regarding EU law provisions (Ringe and Ruof 2018).

The only possible waiver of rules we currently know about stems from the text of Article 54 and concerns personal data protection rules. Indeed, participants in the regulatory sandboxes for AI would be able to process personal data “lawfully collected for other purposes” in order to develop and test “certain innovative AI systems in the sandbox”. This exception to personal data protection rules, however, is subject to a rather large number of cumulative conditions acting as guarantees of individuals’ rights and freedoms. The conditions include the purpose of the innovative AI systems (“safeguarding substantial public interest” in one or more predetermined areas, such as public health), the data being necessary for complying with the high-risk AI systems’ requirements, and the existence of an effective monitoring system for identifying high risks to the fundamental rights of the data subjects as they arise. Furthermore, “any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment” and should not be “transmitted, transferred or otherwise accessed by other parties”. In addition, any processing of personal data shall not lead to measures or decisions affecting the data subjects, and the data need to be deleted “once the participation in the sandbox has terminated or the personal data has reached the end of its retention period”, with the logs of processing themselves subject to retention periods and purpose limitation. There are also strict transparency requirements in the form of a “complete and detailed description of the process and rationale behind the training, testing and validation of the AI system” and a “short summary of the AI project developed in the sandbox, its objectives and expected results published on the website of the competent authorities.” The burden of satisfying these requirements appears to significantly outweigh the advantages of the waiver offered in the context of the sandbox.
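Because the conditions are cumulative, failing any single one of them removes the legal basis for the processing. The following schematic checklist (our own simplified paraphrase of Article 54, not the legal text itself) makes this all-or-nothing structure visible:

```python
# A schematic, simplified paraphrase of the cumulative Article 54 conditions
# (our own illustration, not the legal text itself).

ARTICLE_54_CONDITIONS = {
    "substantial_public_interest_purpose": True,             # e.g. public health
    "data_necessary_for_high_risk_requirements": True,
    "effective_monitoring_of_fundamental_rights_risks": True,
    "functionally_separate_isolated_environment": True,
    "no_transmission_or_access_by_other_parties": True,
    "no_measures_or_decisions_affecting_data_subjects": True,
    "deletion_after_exit_or_end_of_retention_period": True,
    "transparency_documentation_and_published_summary": True,
}


def processing_permitted(conditions: dict[str, bool]) -> bool:
    """Cumulative conditions: every single safeguard must hold."""
    return all(conditions.values())


print(processing_permitted(ARTICLE_54_CONDITIONS))  # True: all safeguards met
print(processing_permitted(
    {**ARTICLE_54_CONDITIONS, "transparency_documentation_and_published_summary": False}
))  # False: one failed condition defeats the entire legal basis
```

Seen this way, the point about the burden of the waiver becomes concrete: the participant must maintain every safeguard for the entire duration of the sandbox to keep the processing lawful.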

The third group of concerns is related to the lack of clarity on the scalability of the regulatory sandboxes for AI, as well as on their place in the system of tools for anticipatory regulation. From a practical perspective, adopting a smart mix of tools for facilitating innovation is considered the best solution, in which regulatory sandboxes are just one piece of the puzzle. For example, in 2019 the UK Civil Aviation Authority (CAA) launched an Innovation Hub and a regulatory sandbox specifically targeting AI innovations (UK Civil Aviation Authority 2021). This combination of innovation hubs and regulatory sandboxes has already been considered as a way to solve some of the scalability issues of the sandboxes (ESMA 2019; European Commission 2020). Furthermore, combining different tools of anticipatory regulation is deemed highly beneficial for the further establishment and development of the Innovation principle (Renda and Simonelli 2019). Additionally, the Council of the EU has already connected regulatory sandboxes with experimental clauses, understood as “legal provisions which enable the authorities tasked with implementing and enforcing the legislation to exercise on a case-by-case basis a degree of flexibility in relation to testing innovative technologies, products, services or approaches” (Council of the EU 2020). This relation, however, is missing from the draft AI Act, where the role of regulatory sandboxes in contributing to evidence-based policymaking is not reflected.

5 Conclusion

In 1996 Richard Susskind expressed the opinion that “we are on the brink of a shift in [the] legal paradigm, a revolution in law” (Susskind 1996). Events such as the global financial crisis of 2008 and the US elections in 2016 prove that we need a radically new approach to the world and the way we regulate it. This approach is still under development, and we are far from completing the transition from reactive regulation to an anticipatory one.

Regulatory sandboxes are certainly a step forward. They provide relative safety and a degree of control, helping regulators to better understand AI and other disruptive technologies before deciding if and how they should be regulated. After all, “regulation is a mere tool. Where helpful for society, it must be used, where not it is best removed” (Zetzsche et al. 2017).

There are also many questions regarding how to overcome the challenges outlined in the present chapter and what evolution regulatory sandboxes will have to go through in order to stay relevant and answer the needs of society and the dynamic nature of AI. There are already some ideas about Sandboxes 2.0, incorporating access to innovative funding methods (Ringe and Ruof 2018), and about guided sandboxes, attempting to resolve the conflict between the national and supranational/federal levels with respect to their interaction in a regulatory sandbox.

Currently, there are more questions than answers on how to regulate AI. Global powers outline different approaches in an attempt to become the most attractive investment destination, but ultimately the most successful will be the one that can stabilise the ‘shifting sands’ of regulatory sandboxes and use them as a cornerstone for building a new way of regulation.Footnote 18