1 Introduction

The development and implementation of artificial intelligence (AI) has gained major attention over the past years. AI has become the centre of discussions within academia [1, 2] and among societal actors [3] on whether this digital technology brings great opportunities or great risks for society. Actors in the policy, politics and regulation domain, too, are reacting to this technological development. Many governmental institutions have formulated and published ‘AI strategies’: visions and narratives on what to expect from the technological development and how to manage its implementation in society [4]. One of the most prominent visions is the Artificial Intelligence Package of the European Union (EU), published in April 2021, which included both a general approach to AI and a proposal for AI regulation (the AI Act) [5]. The EU has been praised for this package and is seen as one of the front runners when it comes to AI regulation [6, 7]. In comparison, the United States White House published its Blueprint for an AI Bill of Rights [8], very similar to the EU AI Act, in October 2022; Australia published a national strategy, but no foundations for a legal framework [Footnote 1] [9].

The question remains, however, to what extent these visions and strategies translate into day-to-day practice. For this research, I build on the work of Johnston [10], Poole and Mackworth [11], Kalogirou [12], Royakkers and Van Est [13], and the European Commission et al. [14], and consider AI to be a digital technology implemented to collect, process and analyse data, act on this analysis and, based on the results of the analysis and depending on the type of AI, possibly improve itself without human intervention. AI can thus permeate all types of processes within a sector, but the impact of this permeation depends on the sector and its related processes. For example, AI can be used in the archiving sector to digitalize handwritten notes or non-digitalized books [15], but it can also be used to flag potentially fraudulent benefit claims and punish those whose claims were flagged, even without a human check of whether these flags were correct, as was the case in the Dutch child-care benefit scandal [16].

Therefore, general AI regulations may not be sufficient. Instead, it is important that the regulation of AI also relates to and becomes part of the specific regulations of each sector. This way, general visions, strategies, and regulation (proposals) are translated into practical frameworks for a sector. It is questionable, however, if sectors, such as the electricity sector, indeed take up AI strategies, especially international strategies and regulation proposals such as those of the EU [17, 18].

To analyse this act of (non-)translation, I focus on the Dutch electricity sector. The Dutch government has been working on a new, comprehensive energy law since 2019. In line with EU energy regulations, the aim of this new Dutch energy law is to support the energy transition, increase the rights for households, increase the capabilities of system operators, and consider (new) digital technologies [19]. As such, it should consider (upcoming) AI regulations, especially where relevant for the electricity sector.

To analyse whether (proposed) AI strategies and energy regulations in the Dutch electricity sector are aligned, the "From EU to national sectors" Section discusses possible processes of translation between the EU AI Act and the new Dutch energy law. The "The EU AI Act" Section describes the parts of the EU AI Act relevant to the electricity sector. Here, four elements are of particular interest, namely: the definition of AI, the possible applications of AI in the electricity sector, the risks connected to these applications and how the Act advises dealing with them, and the control institutions necessary for AI. The "The Dutch AI strategy" Section analyses the Dutch AI strategy, which was published before the EU AI Act, in a similar way. The "The new Dutch energy law" Section regards the new Dutch energy law and focuses on the extent to which the proposed regulations and strategies regarding AI are translated into it. In the last section, I discuss the gaps between AI regulations and sectoral regulations, and their possible impact on practice in the Dutch electricity sector.

2 From EU to national sectors

There are three ways in which the EU AI Act could spread to the Dutch electricity sector. These ways are shown in Fig. 1. The first is direct implementation of the EU AI Act by Dutch electricity sector actors. The second is via digital technology policies: the EU AI Act, or its fundamentals, would be reflected in Dutch national AI regulations, laws or strategies, which are in turn implemented in sectoral laws, such as the (new) Dutch energy law. The third is via energy sector policies: the EU AI Act would be implemented in EU energy laws, which the new Dutch energy law would then have to take into account. None of these pathways is free of issues.

Fig. 1
figure 1

Different ways the EU AI Act could spread to the new Dutch energy law. Source: author

First, Dutch electricity sector actors could, independently from other regulatory bodies, decide to adopt practices congruent with the EU AI Act. Such self-regulation, however, is unlikely to take place and difficult to detect.

The main issue with the second pathway concerns the often-substantial gap between (inter)national strategies on instruments such as AI and their sectoral implementation. Large technological changes, such as the uptake of electricity, the introduction of the computer and, now, the development of AI, result in fundamental changes in society [3, 20]. Such changes therefore require shifts in thinking and working processes. Vision and strategy documents emphasize these shifts. Yet, in day-to-day practice, these documents and the connected regulations are often treated as an add-on: companies seem to see merely another compliance requirement to fulfil, instead of a call for fundamental change.

A recent example of this phenomenon is the way businesses have dealt with the General Data Protection Regulation. The initial idea behind this regulation is that private persons own and control their own data, regardless of whether the data were gathered by a third party, and that private data are protected once a person allows a company to collect them [21]. Its foundation lies in the right to respect for private and family life from the European Convention on Human Rights [22, 23]. Most businesses today, however, treat the legislation as an added compliance requirement: as long as users are asked whether their data can be used and the company fulfils the security standards, no changes are needed [24]. Businesses that truly view their users’ data as owned by the private person and not the company are scarce [25, 26]. The changed ideology of private data and private data ownership thus risks being forgotten in day-to-day practice.

Not only is it difficult to instigate fundamental changes in sectors through (inter)national strategies; these strategies also often lack concrete, pragmatic frameworks that are practical to implement in the sectors. As these strategies consider many different sectors, they do not focus on debates specific to one sector. Which actor, be it the sector or an (inter)national governance institution, should fill in these gaps is often not specified. This results in a chaotic grey zone of (non-)regulation.

Following the third pathway, via sectoral laws, the EU might encounter debates regarding the principles of proportionality and subsidiarity [27]. These principles entail that actions, such as policies and regulations, should be taken at the lowest possible, effective governance level [28]. As such, EU regulation should only be created where it is absolutely necessary for the functioning of the internal market or the fundamental aims of the treaties [29]. Additionally, EU Member States are generally hesitant to agree to EU energy regulations [30]. Although EU energy policy capacity is increasing with the growing number of EU energy laws, Member States still regard the national energy mix and regulations as fundamental components of their sovereign identity and of national control over their economy and vital infrastructures. Changing EU energy laws is therefore a lengthy process and might not be suited to including a framework on the development and implementation of AI in the electricity sector. Currently, no such policy proposals have been published [31]. Furthermore, the EU laws on which the new Dutch energy law states it is based do not refer to AI [32, 33]. Therefore, the remainder of this paper focuses on the second pathway.

3 The EU AI act

European Union laws and regulations are typically built on the principles of proportionality and subsidiarity. In the case of artificial intelligence, however, the European Commission seems to claim exclusive competence, stating on page 6: “[N]ational approaches in addressing the problems will only create additional legal uncertainty and barriers, and will slow market uptake of AI” [34]. By taking on this competence, the EU has created a duty of care for itself: it has also become responsible for drawing up a clear and practical framework for the development and use of AI.

The EU attempts to create such a framework in its EU AI Act. This act is currently in draft as different committees are still reviewing the proposed legislation. In addition to this act, different connected policy documents have been published, ranging from general communications to proposal directives for specific sectors [5]. None of these additional documents directly affect the implementation of AI in the electricity sector beyond the effects of the EU AI Act. Therefore, in this research, the focus lies on the framework the EU AI Act creates for AI in the electricity sector.

To start, the EU AI Act keeps its definition of AI rather broad. The document states on page 39: “‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” [34]. The approaches indicated by the European Commission in Annex I are “(a) [m]achine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) [l]ogic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) [s]tatistical approaches, Bayesian estimation, search and optimization methods” [34]. With this definition of AI, the European Commission shapes a wide field of digital technologies over which it has potential competences.

In the EU AI Act, there is an attempt to create more regulatory clarity by introducing risk levels for AI implementation. AI systems are classified as either unacceptable risk, high risk, or lower or minimal risk [Footnote 2]. This classification, insofar as it applies to electricity sector AI, is summarized in Table 1. Unfortunately, this categorization is not practical for the electricity sector. Although both unacceptable-risk and high-risk AI systems are further defined, no attention is given to lower or minimal risk AI systems. Furthermore, following the definition of the different risk levels, all AI systems in the electricity sector fall under high-risk AI systems. This is because high-risk AI systems are those “intended to be used as safety components in the management and operation of […] heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities”, as stated on page 40 of the proposal [34]. Actors in the electricity sector can of course argue that their AI system is not intended to be used as a safety component in management or operation processes. At the same time, however, control organisations could argue that the AI system is connected to such functions, and that malfunctioning of the AI system would therefore result in malfunctioning of the management or operation of electricity. This reading would result in all AI systems in the electricity sector being classified as high risk.
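The broad reading discussed above can be made explicit as a simple decision rule. The following sketch is a hypothetical illustration only: the tier names follow the Act, but the function, its inputs and the binary attributes are my own construction, not an official classification procedure.

```python
from dataclasses import dataclass

# Tier names as used in the EU AI Act proposal.
UNACCEPTABLE = "unacceptable risk"
HIGH = "high risk"
LOWER = "lower or minimal risk"

@dataclass
class AISystem:
    """Hypothetical, simplified model of an electricity-sector AI system."""
    name: str
    manipulates_behaviour: bool    # falls under the Act's prohibited practices
    linked_to_grid_operation: bool # connected to management/operation of electricity

def classify(system: AISystem) -> str:
    """Classify a system under the broad reading a control organisation could take."""
    if system.manipulates_behaviour:
        return UNACCEPTABLE
    # Under the broad reading, any system whose malfunction could disrupt
    # the management or operation of electricity counts as high risk.
    if system.linked_to_grid_operation:
        return HIGH
    return LOWER

# Even a forecasting tool that merely feeds grid operations ends up high risk.
forecaster = AISystem("load forecaster", False, True)
print(classify(forecaster))  # high risk
```

The point of the sketch is that the second branch swallows essentially every electricity-sector system, which is exactly the impracticality argued above.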

Table 1 Different risk levels as conceptualized by the EU in the EU AI Act. Source: author, based on

If all electricity sector AI systems were classified as high risk, the EU AI Act promises to establish a legislative framework for their development and use. Again, however, the draft Act remains abstract and does not clarify sector-specific subjects. For example, page 4 states that high-risk AI systems are required to ensure “quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity” [34]. What these requirements entail remains unclear, as does who will check compliance with them. The European Union Agency for the Cooperation of Energy Regulators (ACER) might not have enough employees or expertise to carry out such a task, and a new control body might therefore be beneficial [35]. Furthermore, it remains to be discussed who can be held responsible for providing high-quality data. In the EU AI Act, the implementor of the AI system is held responsible. Yet, in the electricity sector, the AI implementor often only gathers data. Data production happens at a lower level, at the electricity consumer, and depends on the data permissions the electricity consumer has given the AI system implementor. Holding the AI implementor responsible for this entire process would therefore not be fair.

The EU AI Act thus raises many questions regarding AI in the electricity sector. It states that data collection and data processing should be of high quality and should ensure cybersecurity and privacy-preserving measures. It also emphasizes that there should be transparency about the data collected, the context in which these data were collected, and who is responsible for safely collecting and storing them. The specifics of these statements are, however, left out, as is which institute will be responsible for managing or checking this.

4 The Dutch AI strategy

The Netherlands published its AI strategy plan in 2019, before the EU AI Act. It uses an earlier European Commission definition of AI, namely: “systems that exhibit intelligent behaviour by analysing their environment and have a certain degree of autonomy in taking action to achieve specific objectives” [Footnote 4] [36]. Although still broad, this definition is narrower than the definition in the EU AI Act: it requires the AI system to go beyond gathering and analysing data by also acting on these analyses.

When it comes to AI implementation in the electricity sector, however, the Dutch AI strategy is not much clearer. A couple of applications are mentioned, such as error detection, operational management or future system decision making for electricity systems [36], but no actual actions or visions are connected to these examples. There is some attention to increasing cybersecurity risks [Footnote 5]. Interestingly, these risks are to be taken care of in a supranational setting: instead of solving these issues at the EU level, the Dutch government proposes to discuss and prevent cybersecurity attacks at the level of the United Nations [36].

5 The new Dutch energy law

Similar to the EU AI Act, the new Dutch energy law is currently in draft, and multiple committees still have the chance to review the policy and submit amendments. Timewise, it can be argued the two policies are developing in parallel. Yet, when it comes to their contents, the two are worlds apart.

Despite the Dutch government connecting AI and the electricity sector in its AI strategy [36], AI is not mentioned in the new energy law. The Ministry of Economic Affairs and Climate, responsible for the draft legislation, acknowledges the digitalization of the energy sector, but does not specify any data processing systems [32, 33]. Concerning digitalization, the focus of this draft legislation is on data gathering and data governance; the (AI) systems which use these data remain undiscussed. This holds true even for increasing cybersecurity risks: there is attention to data risks, but not to other digital technology related issues.

As such, no new controlling agencies for the development or use of AI in the electricity system are identified. The existing agencies, Agentschap Telecom and the Autoriteit Consument en Markt (ACM), will receive additional responsibilities to check on data gathering and governance. No actor is held responsible by this law for possible future AI system failures.

Similar to the EU AI Act, the new Dutch energy law also discusses data collection and processing, though from a different perspective. In this proposed law, the emphasis is on ensuring the quality of the ‘data chain’: the process of different actors collecting data and bringing these data together. Additionally, there is a call for making more data available. Privacy is not discussed in the legislation text itself, although the notes to the legislation contain notions of privacy considerations, such as adhering to existing privacy laws when it comes to data collection and processing. Interestingly, these notes reflect a debate about the balance between collecting enough data for an efficient system and not collecting too much private data. On page 143 of the notes [33], it is stated that the “current legislation draft sufficiently ensures bringing together in a balanced way both the (privacy) interests of citizens, as well as the interests of a good functioning energy system” [Footnote 6], for example, by not using individual but aggregated data.

6 Discussion

This research shows that EU laws, national goals and sectoral laws on AI are yet to be harmonized when it comes to the development and use of AI in the Dutch electricity system. Analysing the specific case of AI in electricity systems, some overlap in legislation can be distinguished in one very particular part of AI: data gathering and data governance. Both legislative proposals emphasize the importance of data quality and transparency, but they take somewhat different stances on privacy and cybersecurity. As the new energy law for the Dutch electricity sector does not discuss AI systems, and the EU AI Act does not focus on individual sectors, the overlap between the draft legislations stops at data. Apart from this overlap in the discussion of data gathering and governance, there are mostly gaps between the different documents, as shown in Fig. 2. There has been an act of non-translation.

Fig. 2
figure 2

Overlap between the Dutch AI strategy and the prospective Dutch energy law and EU AI Act concerning the development and use of AI in the electricity system. Data collection and data processing are clustered, as this refers to the gathering, categorising, and storing of data. In legislation, these processes are often grouped. Source: author

These gaps are concerning, as AI is already being developed and, at a small scale, implemented in electricity systems [37]. It is naïve to believe such developments will skip the EU or the Netherlands, or that AI will not be implemented at a large scale in the Dutch electricity system in the foreseeable future. Therefore, to ensure AI developers, implementors and users have a framework in which they can safely develop and use AI systems, these gaps should be filled with guidance and governance.

The question remains: who will fill in the gaps between the different levels (EU, national and sectoral) of legislation and visions? On the one hand, it feels more natural, and more aligned with the principles of subsidiarity and proportionality, to let sectors complete EU legislation by filling in the details necessary in their practices. This may, however, result in a loss of oversight over the different legislations, and in non-harmonized rules between EU Member States. For example, the Dutch electricity sector might rule all AI systems in electricity systems to be high risk, while at the same time the Belgian electricity sector, following a different reading of EU legislation, might flag only those AI systems as high risk which directly concern safety components. This could lead to inequalities in the EU internal market. On the other hand, therefore, it can be beneficial to let the EU take up the competence it has appropriated regarding AI legislation. That would, however, mean that the EU must be much more detailed in its AI legislation, to the point of specifying for each sector what certain types of AI use mean in terms of certifications, security, and sustainable use [38]. In that case, Stahl et al. [35] raise the right question: is a new institutional body necessary for managing the development and use of AI in the EU? Then again, governmental bodies already struggle to attract enough AI experts. How would a new institutional body deal with the current job market?

These findings also bring to the fore interesting possibilities for future research. Are there other sectors in which similar gaps between EU AI regulations and sectoral regulations emerge? If so, the introduction of the EU AI Act into practice should be carefully guided to ensure it connects with existing regulations. If other sectors do not experience (similar) gaps, the (Dutch) electricity sector could consider reformulating parts of the new energy law to pre-emptively fill the potential gap. Finally, it would be interesting to analyse how these different regulations fit into areas of cross-sectoral work. For example, how are regulations implemented in the case of electric vehicle infrastructure, where both the mobility and the energy sector are involved? Which entity becomes responsible for following which rules, and how does this (not) distort the cooperation between the different actors?