The AI Act’s Research Exemption: A Mechanism for Regulatory Arbitrage?

YSEC Yearbook of Socio-Economic Constitutions 2023

Part of the book series: YSEC Yearbook of Socio-Economic Constitutions (YSEC, volume 2023)

Abstract

This paper argues that, by failing to acknowledge the complexity of modern research practices, which are shifting from single-discipline work to multidisciplinary collaborations involving many entities, some public and some private, the proposed AI Act creates mechanisms for regulatory arbitrage. The paper begins with a semantic analysis of the concept of research from a legal perspective. It then explains how the proposed AI Act addresses the concept of research by examining the research exemption set out in the current draft of the forthcoming law. After providing an overview of the proposed law, the paper explores the research exemption to highlight any gaps, ambiguities, or contradictions that may be exploited by public or private actors seeking to use the exemption as a shield against the duties the law imposes.

To assess whether the research exemption reflects a coherent legal rule, the paper considers it from five perspectives. First, it examines the extent to which the exemption applies to private or commercial entities that may not pursue research benevolently to solve societal problems but nevertheless contribute to innovation and economic growth within the EU. Second, it explores how the exemption applies to research that takes place within academia but is on the path to commercialization. Third, it considers the situation where academic researchers invoke the exemption and then provide the AI they develop to their employing institutions or other public bodies at no cost. Fourth, it inspects how the exemption functions when researchers build high-risk or prohibited AI, publish their findings, or share them via an open-source platform, and other actors copy the AI. Finally, it considers how the exemption applies to research that takes place “in the wild” or in regulatory sandboxes.

This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program—Humanities and Society (WASP-HS) funded by the Marianne and Marcus Wallenberg Foundation and the Marcus and Amalia Wallenberg Foundation. Additionally, I would like to express my gratitude to Associate Professor John Gerard Dinsmore for inspiring me to write this paper and providing helpful comments on the manuscript, or parts thereof, at various stages in its development. I would also like to thank the blind peer reviewers for their extremely helpful feedback. As far as shortcomings are concerned, they are all attributable to the author.

Notes

  1.

    Quinn (2021), p. 1.

  2.

    Id. at p. 22 (explaining, “Whilst the traditional image of research is that of a university research group or university hospital, the reality is that research has always and will always be conducted by a variety of actors. A large range of private entities, varying from small organisations (sic) to large and powerful tech and social media giants are continuously engaged in research. Whilst the motives of such research may often be more commercial in nature, it is nonetheless indispensable for innovation and economic growth.”); see also European Data Protection Supervisor (2020), p. 7 (“Research is clearly no longer the preserve of academia, if indeed it ever was. The interface, including the exchange of data, between research organisations and the wider research ecosystem is highly complex. Scientific publishers, designers and developers, entrepreneurs, and funding sources in the commercial, governmental and non-profit sectors all have a stake. Also playing a role are big data analytics firms and cloud service providers, and platforms and apps whose business models involve the accumulation and monetisation of as much data as possible.”).

  3.

    Quinn (2021), p. 24.

  4.

    European Data Protection Supervisor (2020), p. 6.

  5.

    Id. at 5 (“The distinction between, on the one hand, genuine research for the common good and, on the other, research which serves primarily private or commercial ends, has become ever more blurred.”).

  6.

    See Willesson (2017); see also Partnoy (1997), p. 227 (defining “Regulatory Arbitrage” as the process of designing transactions or business practices “specifically to reduce costs or capture profit opportunities created by differential regulation or laws.”); see generally Fleischer (2010).

  7.

    European Parliamentary Research Service (2022).

  8.

    For more see García (2019), p. 203; see also Burk (2016), pp. 15–16; Cohen (2017), p. 184.

  9.

    Allen (2020), p. 309.

  10.

    Marelli et al. (2021).

  11.

    European Data Protection Supervisor (2020).

  12.

    But cf. Konečná et al. (2022) (proposing to define “scientific research and development” as “any scientific development, experimentation, analysis, testing or validation carried out under controlled conditions.”).

  13.

    Oxford Advanced Learner’s Dictionary, Research (n.d.).

  14.

    Booth et al. (2016), pp. 51–65.

  15.

    Kelly et al. (2014), p. 228, citing “What Is Peer Review?” (2014). Int J Comput Appl. Web. Retrieved July 02, 2014, from http://www.ijcaonline.org/peer-review.

  16.

    Spector et al. (2012).

  17.

    See Comandè and Schneider (2022), p. 568; Richter (2018), p. 53; Slokenberga (2022), p. 139 (stating, “(the) borders of the scientific research regime – and consequently, possibilities to take advantage of the rules set forth therein – remain undefined.”).

  18.

    Directive (EU) 2019/790.

  19.

    Id. recital 12.

  20.

    Id. recital 12, Article 2(1) (defining “‘research organization’ as a university, including its libraries, a research institute or any other entity, the primary goal of which is to conduct scientific research or to carry out educational activities involving also the conduct of scientific research: (a) on a not-for-profit basis or by reinvesting all the profits in its scientific research; or (b) pursuant to a public interest mission recognised by a Member State; in such a way that the access to the results generated by such scientific research cannot be enjoyed on a preferential basis by an undertaking that exercises a decisive influence upon such organisation.”).

  21.

    Directive (EU) 2019/790, recital 12; see also Comandè and Schneider (2022), pp. 567–568 (explaining “… contrary to the Recommendation or the Open Data Directive, the Copyright Directive appears to implicitly draw a distinction between not-for-profit and public interest-oriented research entities, on the one hand, and organizations operating for commercial purposes on the other.”).

  22.

    Proposal for a Regulation of the European Parliament and of the Council on Harmonised Rules on Fair Access to and Use of Data (Data Act) COM(2022) 68 final, 23 February 2022 (Data Act), Recital (68), Article 21.

  23.

    Buttarelli (2018) (stating “While the GDPR does not provide a formal definition, it embraces a wide view of what scientific research is. It covers ‘technological development, fundamental research, applied research and privately funded research.’”).

  24.

    GDPR, Recital 159.

  25.

    Id.

  26.

    Id.

  27.

    C-215/88 Casa Fleischhandel ECLI:EU:C:1989:331, para 31 (stating “whilst a recital in the preamble to a regulation may cast light on the interpretation to be given to a legal rule, it cannot in itself constitute such a rule.”).

  28.

    European Data Protection Board (2020), para 153.

  29.

    Id.

  30.

    Id.; see also Article 29 Working Party (2020), para 153, p. 130.

  31.

    Marelli et al. (2021).

  32.

    Id.

  33.

    GDPR, Recital 159 (referring to Article 179(1) of the Treaty on the Functioning of the EU); Schneider (2019), p. 268.

  34.

    European Data Protection Supervisor (2020), p. 10.

  35.

    Id. at 12.

  36.

    Id.

  37.

    The UK Information Commissioner (2018), p. 11 (explaining, “The purpose of scientific or historical research is to produce new knowledge or to apply existing knowledge in novel ways, often with the aim of benefiting the public interest. Scientific or historical research aims to: advance the state of the art in a given field or provide innovative solutions to human problems; generate new understandings or insights that add to the sum of human knowledge in a particular area; or produce findings of general application that can be tested and replicated.”).

  38.

    de Vries (2022), p. 122.

  39.

    Marelli et al. (2021).

  40.

    European Commission (2016); see also European Commission (2020).

  41.

    European Commission (2020).

  42.

    Commission Recommendation (EU) 2018/790.

  43.

    Directive (EU) 2019/1024, Recital 27.

  44.

    Id., Article 2(9) (stating that “research data” means “documents in a digital form, other than scientific publications, which are collected or produced in the course of scientific research activities and are used as evidence in the research process, or are commonly accepted in the research community as necessary to validate research findings and results.”).

  45.

    Id., Article 2(11) (stating that “re-use” means “the use by persons or legal entities of documents held by: (a) public sector bodies, for commercial or non-commercial purposes other than the initial purpose within the public task for which the documents were produced, except for the exchange of documents between public sector bodies purely in pursuit of their public tasks.”).

  46.

    Proposed Data Governance Act, recital 35.

  47.

    Proposed Data Act.

  48.

    Proposed Regulation on the European Health Data Space.

  49.

    European Commission (2022).

  50.

    Comandè and Schneider (2022), p. 567 (further distinguishing different categories of data: “1) public data employed for public interest-related research purposes, 2) public data employed for commercial-related research purposes, 3) private data employed for public-interest related research purposes, and ultimately, 4) private data employed for commercial-related research and innovation purposes.”).

  51.

    The Swedish Ethical Review Act, para 2 (defining “forskning” (research), in English translation, as “scientific experimental or theoretical work, or scientific studies through observation, where the work or studies are carried out to acquire new knowledge, as well as development work on a scientific basis, but not such work or studies performed solely within the framework of first- or second-cycle higher education.”).

  52.

    See, e.g., Regeringens proposition 2018/19:165 (n.d.), p. 9 (stating, in English translation, “Conducting research on humans without an ethical review approval, or in breach of such an approval, may involve research methods that entail a risk that a research subject comes to harm or that someone’s personal integrity is violated.”).

  53.

    Regeringens proposition 2018/19:165 (n.d.), p. 34 (noting, in English translation, that “Kungl. Vetenskapsakademien [the Royal Swedish Academy of Sciences] points out that common hallmarks of research are the presence of a scientific research question, a systematic way of working, transparent methodology, an intention to publish in a scientific journal, and funding through research grants.”).

  54.

    Floridi et al. (2019), p. 361.

  55.

    Id.

  56.

    Marelli et al. (2021).

  57.

    Id.

  58.

    Comandè and Schneider (2022), p. 568; Richter (2018), p. 53.

  59.

    TFEU, Article 16(1).

  60.

    Commission proposal AI Act, explanatory memo section 2.1; see also Commission proposal AI Act, explanatory memo section 1.1 (explaining that the Commission has four goals, namely to “ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values; ensure legal certainty to facilitate investment and innovation in AI; enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.”).

  61.

    European Parliamentary Research Service (2023), p. 3.

  62.

    Smuha et al. (2021), pp. 15–16.

  63.

    Commission proposal AI Act, Article 2(1)(a).

  64.

    Id. at Article 3(2).

  65.

    Id. at Article 3(9); see further Article 3(10).

  66.

    Id. at Article 3(11).

  67.

    Commission proposal AI Act, Article 3(10) (defining the concept of “making available on the market” as “any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.”).

  68.

    Regulation (EU) 2017/745 on medical devices, Article 1 (the MDR also applies “to clinical investigations concerning such medical devices and accessories conducted in the Union.”).

  69.

    Engler and Renda (2022), p. 4.

  70.

    Id.

  71.

    Id.; Commission proposal AI Act, Article 3(5)–(8).

  72.

    Smuha et al. (2021), p. 16.

  73.

    29 November 2021 Council Compromise text, Article 2(6), Article 2(7).

  74.

    29 November 2021 Council Compromise text, Article 2(6) reads: “This Regulation shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development.”

  75.

    29 November 2021 Council Compromise text, Article 2(7) reads: “This Regulation shall not affect any research and development activity regarding AI systems in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.”

  76.

    29 November 2021 Council Compromise text, recital 12(a).

  77.

    29 November 2021 Council Compromise text, recital 12(a).

  78.

    Fourth Presidency compromise text, recital 12(b).

  79.

    Council General Approach.

  80.

    Council General Approach, Article 3(2) (“provider” means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge).

  81.

    See Smuha et al. (2021), p. 21 (stating, “While an academic researcher may not be a ‘provider’ who places an AI system on the market in the course of a commercial activity (Article 3(10)), they might be accurately described as “a natural person” who “develops an AI system with a view to” supplying it “for first use directly to the user or for own use” “free of charge.” The text of the Proposal seems to suggest that in such a case, the proposed Regulation would apply to academic researchers.”).

  82.

    Parliament General Approach, Article 2(5)(d); see also Recital 2(f).

  83.

    Id.

  84.

    Quinn (2021), p. 24.

  85.

    European Data Protection Supervisor (2020), p. 7.

  86.

    Schneider (2019), p. 256.

  87.

    Id. at 270.

  88.

    Commission proposal AI Act, Recital 16.

  89.

    29 November 2021 Council Compromise text, recital 12(a).

  90.

    See Council General Approach, recital 12(a).

  91.

    Slokenberga (2022), p. 140 (explaining, “While ethical approval has long been an essential feature in biomedical research, not necessarily all countries have adopted that as a requirement for purely data-driven research” and comparing approaches in Sweden and Latvia).

  92.

    Marelli et al. (2021); see also Brightman et al. (2019); see also European Commission (2021a), p. 13 (stating, “Authorities do not have the powers, procedural frameworks and resources to ensure and monitor AI development and use complies with applicable rules.”).

  93.

    Marelli et al. (2021).

  94.

    Id.

  95.

    Id.

  96.

    European Data Protection Supervisor (2020), p. 7.

  97.

    Quinn (2021), p. 30.

  98.

    Id.

  99.

    European Data Protection Supervisor (2020), p. 38.

  100.

    Caulfield et al. (2012), p. 1.

  101.

    Id.

  102.

    Quinn (2021), p. 21.

  103.

    Ienca and Malgieri (2022), p. 13.

  104.

    Id.

  105.

    Parliament General Approach Recital 72(b) and Article 54(a).

  106.

    For an excellent and complete discussion of these cases in relation to the research exemption in the GDPR, see Schneider (2019), pp. 261–266.

  107.

    See Bogucki (2022), p. 28 (stating, “There is a need to ensure that the AI Act does not undermine R&D around AI systems. This regulation should apply only when scientific projects are expected to lead to or entail placing an AI system on the market. AI systems developed for the sole purpose of scientific research should not be constrained by the same regulatory framework. Yet, the exemption needs to be drafted so as to prevent it being used [as] a means to circumvent AI Act requirements by structuring work as R&D.”).

  108.

    The United Nations Environment Programme (1989) (explaining “Product-oriented research is the exercise of evaluating the nature of the market for a particular product, the relative merits of different marketing strategies for that product and so forth.”).

  109.

    Kazim et al. (2022), p. 386 (stating, “…the determination of which AI systems are for the “sole purpose of scientific research and development” is not an easy task.”).

  110.

    de Vries (2022), p. 121.

  111.

    Quinn (2021), p. 22.

  112.

    Shrourou (2020).

  113.

    Slokenberga (2021), p. 21.

  114.

    Caulfield et al. (2012), p. 2.

  115.

    Id.

  116.

    Ake-Kob et al. (2022), p. 9.

  117.

    Abramo et al. (2021), p. 2 (stating, “The ability of industry to exploit the results of academic research is a distinctive competence of advanced economies.”).

  118.

    Shrourou (2020).

  119.

    Id.

  120.

    Id.

  121.

    Letourneur et al. (2020), p. 1.

  122.

    Id. at abstract.

  123.

    This idea came from Associate Professor John Dinsmore at Trinity College Dublin during our discussions on this paper.

  124.

    Enterprise Ireland (n.d.).

  125.

    Id.

  126.

    Id.

  127.

    Id.

  128.

    Letourneur et al. (2020), p. 2.

  129.

    Shrourou (2020).

  130.

    Zawacki-Richter et al. (2019), p. 11 (explaining “The basis for many AI applications are learner models or profiles that allow prediction, for example of the likelihood of a student dropping out of a course or being admitted to a programme, in order to offer timely support or to provide feedback and guidance in content related matters throughout the learning process. Classification, modelling and prediction are an essential part of educational data mining.”).

  131.

    Id.

  132.

    Quinn (2021), p. 23.

  133.

    Parlak and Dogan (2022), p. 6.

  134.

    See also Kiseleva (2021), p. 3 (stating, “It seems that the concept of the ‘user’ suggested in the proposed regulation has a dual character and applies both to organizations applying AI systems and natural persons doing so inside the organization. In this case, the roles and obligations of these subjects have to be clearly distinguished. Otherwise, their proper accountability can be difficult to ensure.” (internal citation omitted)); see further Herrmann and Pfeiffer (2022), p. 1523 (emphasizing institutional perspectives and arguing that the binary of “human and technology” does not consider that “the use of technology and the decisions generated in this interplay of humans and technology are embedded in human organizations.”).

  135.

    Commission proposal AI Act, Article 3(4).

  136.

    Colonna (2022), pp. 40–41.

  137.

    European Ombudsman (2021), p. 8.

  138.

    Id. (stating, “The EU administration can be both a provider and a user of AI systems. If EU institutions, bodies, offices and agencies are developing the systems in-house (not ‘buying them off the shelf as a finalised product’ from the market), the EU administration will be considered ‘provider’ when putting those systems into service for its own use.”).

  139.

    Id.

  140.

    Id.

  141.

    Osborne Clarke (2021).

  142.

    See Engler and Renda (2022).

  143.

    See European Parliament resolution (2020) (defining “backend operator” as “any natural or legal person who, on a continuous basis, defines the features of the technology and provides data and an essential backend support service and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system.”).

  144.

    Osborne Clarke (2021).

  145.

    Richter (2018), p. 52.

  146.

    European Commission (2018).

  147.

    Richter (2018).

  148.

    European Commission (2021b), p. 31.

  149.

    Centre for Information Policy Leadership (2021).

  150.

    Krakowski (2006), p. 321 (explaining “Free and open source software enables a cheaper way to introduce information and communication technologies to citizens of the developing world and by doing so, reducing the digital divide between and within these countries.”).

  151.

    de Laat (2021), p. 1154.

  152.

    Id.

  153.

    Engler (2022).

  154.

    European Commission (2021b), p. 31.

  155.

    Bretthauer (2002).

  156.

    Engler (2022).

  157.

    Yordanova (2022), p. 3.

  158.

    Id.

  159.

    Ebers et al. (2021), p. 591.

  160.

    Engler (2022) (explaining, “The Council’s draft of the AIA includes two exemptions that circumstantially apply to open-source GPAI models, but both have serious problems. The first exemption excludes all AI models that are only used for research and development from the entirety of the AIA. Yet open-source developers are most motivated by the idea of building things that people use, meaning this restriction decreases the incentive to contribute to open-source AI.”).

  161.

    Council General Approach, Article 3(1)(b).

  162.

    See also Parliament General Approach, Recitals 12(a) and 12(b).

  163.

    Richter (2018), p. 56.

  164.

    Bogucki et al. (2022), p. 9 (explaining, “The AI Act could, for instance, clarify that publishing models and putting them online to access for free (making them open source) would not constitute putting them on the market (thus not triggering the requirements).”).

  165.

    See Bender et al. (2021), p. 615 (arguing that model documentation is necessary “to understand training data characteristics in order to mitigate some of these attested issues or even unknown ones” and that a solution “is to budget for documentation as part of the planned costs of dataset creation, and only collect as much data as can be thoroughly documented within that budget.”).

  166.

    Mitchell et al. (2019).

  167.

    Gebru et al. (2018).

  168.

    Holland et al. (2018).

  169.

    Alsallakh et al. (2022).

  170.

    Arnold et al. (2019).

  171.

    Parliament General Approach, Recital 12(c).

  172.

    AI TechPark (2020) (providing examples of situations where AI has “gone wrong.”).

  173.

    Id.

  174.

    Björling et al. (2020), p. 3 (stating “Lab studies cannot adequately account for the open-ended encounters that happen between people and robots that are context-dependent. Studies in HRI often privilege the technological capabilities of robots over important factors such as social context and needs of a diverse group of users or stakeholders (internal citations removed).”).

  175.

    Tschider (2021), p. 714.

  176.

    National Science and Technology Council (2016), p. 32 (explaining “There are several technical approaches to enhancing the accountability and robustness of complex algorithmic decisions. A system can be tested “in the wild” by presenting it with situations and observing its behavior”); Rooksby et al. (2009), p. 561 (explaining, “Testing now also goes well beyond mitigating defects in programs, looking at issues such as their performance, reliability, internationalisation, and security.”).

  177.

    Rooksby et al. (2009), p. 561.

  178.

    Chamberlain and Crabtree (2019), abstract.

  179.

    Rooksby et al. (2009), p. 561 (explaining, “Testing no longer simply focuses on the technology alone but on socio-technical issues such as acceptability, usability and fitness for purpose. Testing is also increasingly focused on systems rather than software in isolation.”).

  180.

    Smuha et al. (2021), p. 17.

  181.

    Rooksby et al. (2009), p. 561 (explaining, “Initially programmers were encouraged to test other programmers’ work rather than their own because they may be unwilling to admit to their own faults, but the increasing scope and complexity of testing is now serving to make testing a profession in its own right. Many larger software teams include professional testers. Specialist consultancies are often used to supply expertise, particularly in specialist tasks such as performance testing. We are also seeing the emergence of software test centres to which testing can be outsourced.”).

  182.

    Council General Approach, Article 3(48) (“‘testing in real world conditions’ means the temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation; testing in real world conditions shall not be considered as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all conditions under Article 53 or Article 54a are fulfilled.”).

  183.

    Council General Approach, Article 54(a).

  184.

    Parliament General Approach, Article 3(44n).

  185.

    European Parliamentary Research Service (2022).

  186.

    Id.; see also Kelly (2023) (defining the four key benefits of AI regulatory sandboxes as (1) reducing the time and cost of commercializing AI products and services, (2) supporting improved regulation via regulatory learning, (3) improving AI standards development processes and enabling the development of standards alongside regulation, and (4) enabling increased market participation by small and medium-sized enterprises (SMEs)); see further Undheim et al. (2023); see also Parliament General Approach, Recital 72 and Article 53(1)(d).

  187.

    European Parliamentary Research Service (2022).

  188.

    See further Grady (2022) (acknowledging that sandboxes should not create a “wild west” of AI development but arguing for experimentation clauses to “allow authorities to act with a degree of discretion when enforcing or implementing rules.”).

  189.

    Commission proposal AI Act, Article 53(1).

  190.

    Id. at Article 55(1)(a).

  191.

    Id. at Article 53(3).

  192.

    Ringe (2023) (stating, “The proposed AI Act does mention the idea of a sandbox on the margin. However, the proposed text resembles more of a symbolic marketing trick rather than a serious utilization of the potential of a sandbox.”).

  193.

    Council General Approach, Article 53(3).

  194.

    Id.

  195.

    Id. at Article 3(52) (“AI regulatory sandbox” means “a concrete framework set up by a national competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a specific plan for a limited time under regulatory supervision.”).

  196.

    Parliament General Approach, Article 3(44g) (“regulatory sandbox” means “a controlled environment established by a public authority that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan under regulatory supervision.”).

  197.

    Parliament General Approach, Article 3(44n) (“testing in real world conditions” means “the temporary testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment.”).

  198.

    Council General Approach, Article 3(2) (“provider” means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge); but see also Council General Approach, Recital 72 (stating, “The supervision of the AI systems in the AI regulatory sandbox should therefore cover their development, training, testing and validation before the systems are placed on the market or put into service, as well as the notion and occurrence of substantial modification that may require a new conformity assessment procedure.”).

  199.

    Parliament General Approach Article 53(1)(g) (“Insofar as the AI system complies with the requirements when exiting the sandbox, it shall be presumed to be in conformity with this regulation.”).

  200.

    Parliament General Approach Article 53(3) (“The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities, including at regional or local level.”).

  201.

    Parliament General Approach Article 53(4).

  202.

    Caulfield et al. (2012).

  203.

    See generally, Cane and Kritzer (2010).

  204.

    For more, see Engler and Renda (2022).

References

Journals and Articles

  • Allen HJ (2020) Sandbox boundaries. Vanderbilt J Entertain Technol Law 22:299–321

  • Björling EA, Thomas K et al (2020) Exploring teens as robot operators, users and witnesses in the wild. Front Robot AI 7(5):1–15

  • Burk DL (2016) Perverse innovation. William Mary Law Rev 58:1–34

  • Caulfield T, Harmon S, Joly Y (2012) Open science versus commercialization: a modern research conflict? Genome Med 4(17):1–11

  • Cohen J (2017) Law for the platform economy. UC Davis Law Rev 51:133–204

  • Colonna L (2022) Artificial intelligence in higher education: towards a more relational approach. Loyola Univ Chic J Regul Compliance 8:18–54

  • Comandè G, Schneider G (2022) Differential data protection regimes in data-driven research: why the GDPR is more research-friendly than you think. German Law J 23(4):559–596

  • De Laat PB (2021) Companies committed to responsible AI: from principles towards implementation and regulation? Philos Technol 34:1135–1193

  • Ebers M, Hoch V et al (2021) The European Commission’s proposal for an Artificial Intelligence Act—a critical assessment by members of the Robotics and AI Law Society (RAILS). Multidiscip Sci J 4(4):589–603

  • Fleischer V (2010) Regulatory arbitrage. Tex Law Rev 89:227–289

  • Floridi L, Luetge C et al (2019) Key ethical challenges in the European Medical Information Framework. Minds Mach 29(3):355–371

  • García K (2019) Copyright arbitrage. Calif Law Rev 107:199–266

  • Herrmann T, Pfeiffer S (2022) Keeping the organization in the loop: a socio-technical extension of human-centered artificial intelligence. AI Soc 38:1523–1542

  • Ienca M, Malgieri G (2022) Mental data protection and the GDPR. J Law Biosci 9(1):1–19

  • Kazim E, Güçlütürk O et al (2022) Proposed EU AI Act—presidency compromise text: select overview and comment on the changes to the proposed regulation. AI Ethics 3:381–387

  • Kelly J, Sadeghieh T et al (2014) Peer review in scientific publications: benefits, critiques, and a survival guide. EJIFCC 25(3):227–243

  • Letourneur D, Joyce K et al (2020) Enabling MedTech translation in academia: redefining value proposition with updated regulations. Adv Healthc Mater 10(1):1–9

  • Marelli L, Testa G, Van Hoyweghen I (2021) Big tech platforms in health research: re-purposing big data governance in light of the general data protection regulation’s research exemption. Big Data Soc 8(1)

  • Partnoy F (1997) Financial derivatives and the costs of regulatory arbitrage. J Corp Law 22:211–256

  • Quinn P (2021) Research under the GDPR – a level playing field for public and private sector research? Life Sci Soc Policy 17(4):1–33

  • Richter H (2018) Open science and public sector information: reconsidering the exemption for educational and research establishments under the directive on re-use of public sector information. Inf Technol E-Commerce Law 9(1):51–74

  • Rooksby J, Rouncefield M, Sommerville I (2009) Testing in the wild: the social and organisational dimensions of real world practice. Comput Supported Coop Work 18:559–580

  • Schneider G (2019) Disentangling health data networks: a critical analysis of Articles 9(2) and 89 GDPR. Int Data Privacy Law 9(4):253–271

  • Bretthauer D (2002) Open source software: a history. Inf Technol Libr 21(1):3–11

  • Slokenberga S (2022) Scientific Research Regime 2.0? Transformations of the research regime and the protection of the data subject that the proposed EHDS regulation promises to bring along. Technol Regul 2022:135–147

  • Tschider CA (2021) Beyond the “Black Box”. Denver Law Rev 98:683–724

  • Undheim K, Erikson T, Timmermans B (2023) True uncertainty and ethical AI: regulatory sandboxes as a policy tool for moral imagination. AI Ethics 3:997–1002

  • Zawacki-Richter O, Marín V et al (2019) Systematic review of research on artificial intelligence applications in higher education – where are the educators? Int J Educ Technol High Educ 16(39):1–39

Books and Chapters

  • Booth W, Colomb G et al (2016) The craft of research, 4th edn. University of Chicago Press, London

  • Brightman A, Beever J, Hiles M (2019) Next-generation ethical development of medical devices: considering harms, benefits, fairness, and freedom. In: Abbas A (ed) Next-generation ethics: engineering a better society. Cambridge University Press, Cambridge, pp 387–410

  • Cane P, Kritzer H (2010) The Oxford handbook of empirical legal research. Oxford Academic, online edition

  • Chamberlain A, Crabtree A (2019) Research ‘in the wild’. In: Chamberlain A, Crabtree A (eds) Into the wild: beyond the design research lab. Studies in applied philosophy, epistemology and rational ethics. Springer, Cham, pp 1–6

  • De Vries K (2022) A researcher’s guide for using personal data and non-personal data surrogates: synthetic data and data of deceased people. In: De Vries K, Dahlberg M (eds) De Lege 2021: law, AI and digitalization. Iustus förlag, Uppsala, pp 117–140

  • Krakowski P (2006) ICT and free open source software in developing countries. In: Berleur J, Nurminen MI, Impagliazzo J (eds) Social informatics: an information society for all? In remembrance of Rob Kling. HCC 2006. IFIP International Federation for Information Processing, vol 223. Springer, Boston, pp 319–330

  • Parlak B, Dogan K (2022) The handbook of public administration, vol 1. Livre de Lyon

  • Slokenberga S (2021) Setting the foundations: individual rights, public interest, scientific research and biobanking. In: Slokenberga S, Tzortzatou O, Reichel J (eds) GDPR and biobanking, Law, governance and technology series, vol 43. Springer, Cham, pp 11–30

  • Willesson M (2017) What is and what is not regulatory arbitrage? A review and synthesis. In: Chesini G, Giaretta E, Paltrinieri A (eds) Financial markets, SME financing and emerging economies. Palgrave Macmillan, Cham, pp 71–94

Proceedings and Conference Papers

  • Bender E et al (2021) On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, Virtual Event, Canada, pp 610–623

  • Mitchell M et al (2019) Model cards for model reporting. In: FAT* '19: Proceedings of the conference on fairness, accountability, and transparency, pp 220–229

Online Publications

Author information

Correspondence to Liane Colonna.

Appendices

Legislation

  • Consolidated Version of the Treaty on the Functioning of the European Union (“TFEU”) (2008) OJ C 115/47.

  • Directive (EU) 2019/790 of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (2019) OJ L-130/92.

  • Directive (EU) 2019/1024 of the European Parliament and of the Council of 20 June 2019 on Open Data and the Re-Use of Public Sector Information (2019) OJ L 172.

  • Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation, “GDPR”) (2016) OJ L 119/1.

  • Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (2017) OJ L 117/1.

  • Commission Recommendation (EU) 2018/790 of 25 April 2018 on access to and preservation of scientific information (2018) C/2018/2375 OJ L 134.

  • Sw. Lag (2003:460) om etikprövning av forskning som avser människor (Act concerning the ethical review of research involving humans, the “Swedish Ethical Review Act”).

Proposed Legislation

Case Law

  • C-215/88 Casa Fleischhandel ECLI:EU:C:1989:331.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Colonna, L. (2023). The AI Act’s Research Exemption: A Mechanism for Regulatory Arbitrage?. In: Gill-Pedro, E., Moberg, A. (eds) YSEC Yearbook of Socio-Economic Constitutions 2023. YSEC Yearbook of Socio-Economic Constitutions, vol 2023. Springer, Cham. https://doi.org/10.1007/16495_2023_59

  • DOI: https://doi.org/10.1007/16495_2023_59

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-55831-3

  • Online ISBN: 978-3-031-55832-0

  • eBook Packages: Law and Criminology, Law and Criminology (R0)
