The European framework for cybersecurity: strong assets, intricate history

Over the last decade, the European Union (EU) has demonstrated a consistent determination to promote a global, open, stable, and secure cyberspace for everyone. A structured, chronological review of key EU documents, reports, and directives on cybersecurity shows that the recommendations from the relevant EU institutions (Parliament, Commission, Council) have been persistent over time, reiterating the same core issues, which appear not to have been resolved after a decade of debates and expert advice. Since at least 2012, EU institutions have identified the two domains under constant critical observation for the deployment of a coordinated European cybersecurity approach, gaps in policies and poor integration, while the European fundamentals of cybersecurity (both human and physical) have consistently been seen as an asset rather than a liability. However, the progressive de-professionalization of coding, which tends to blur the distinction between amateurs and professionals, should not be underestimated, as it furtively introduces a new class of risk related to unverified or circularly certified skills. It is therefore recommended that the regulatory framework be expanded to better govern the accreditation/certification of professional cybersecurity experts as well.


Background
Over the last decade, the European Union (EU) has shown a consistent determination to promote a global, open, stable, and secure cyberspace for everyone, and a clear desire "to take a more proactive stance in the discussions (...) on international security in cyberspace" [7, p. 20]. In addition to identifying a variety of fast-evolving cyber threats 1 , the EU has established a list of sectors, industries, technologies, institutions, and services (such as, but not limited to, "hospitals, energy grids, railways, and the ever-increasing number of connected objects in our homes, offices and factories" [8] 2 ) that are considered both critical to our societies and economies, and highly exposed to the risk of digital criminality. Furthermore, the concept of cybersecurity itself has evolved over the same timeframe: from an original focus on the technological weaknesses preventing the EU Member States and the private sector from extracting the potential value of the Internet, it has progressively shifted to a broader but more acute concern over: a) defense and security deficiencies that could harm the "integrity and security of democratic systems" [12, p. 23] and challenge the resilience of essential infrastructure [9][10][11], which should be preserved and protected by all means, especially since the "EU's critical infrastructure and essential services are increasingly interdependent and digitized" [12, p. 5]; and b) the detrimental social impact of malicious or criminal use of the Internet [5].
As early as December 2000, the European Commission communicated to the European Parliament and Council a list of important issues, including cybercrime and cybersecurity, to be addressed by the European Council in Nice on 7-8 December [15]. By doing so, the EU initiated the process that moved cybersecurity higher on the priority list of European institutions and agencies. Eventually, in September 2013, the EU defined its first comprehensive Cybersecurity Strategy [16] and has since released an abundant corpus of reports, studies, and, ultimately, a set of policies and directives that have been regularly updated over the years 3 . Undoubtedly, key milestones in the European cybersecurity policymaking process have been the first Network and Information Systems (NIS) Directive [18] and the Cybersecurity Act [17,19], followed by the European Strategy for Data (which comprises the Data Governance Act [20], the Digital Markets Act [21], and the Digital Services Act [22] 4 ), the more recent Data Act [25], and the revised Network and Information Systems (NIS 2) Directive [23].

1 Namely, ransomware, malware, cryptojacking (or hidden cryptomining), email attacks, data breaches and leaks, distributed denial-of-service (DDoS) attacks, disinformation, non-malicious threats, and supply chain threats [12].
2 The European Commission listed 10 critical sectors, "energy, transport, banking, financial market infrastructure, health, drinking water, waste water, digital infrastructure, public administration, and space" [9, p. 3], and gives the Member States the possibility "to identify critical entities using common criteria on the basis of a national risk assessment."
3 See, for example, the cybersecurity timeline of the European Council [13] and related documents.
4 A crisp description of these Acts is provided by ITA [29].
The concrete impact of the decade-long regulatory effort on cybersecurity has been pervasive and highly diversified, ranging from a substantial increase in the awareness of cyber risks [28] to the development and adoption of new artificial intelligence (AI)- or machine learning (ML)-based products and services to counteract cybercrime [1,35,38,39], to name just two examples.
However, after several rounds of updates and improvements to European cybersecurity proposals and regulations, it is also useful to highlight issues whose resolution would further facilitate the transition to a safer digital life in Europe.

Top-down: persistent gaps
A systematic survey of more than 10 years of EU documents, reports, and directives on cybersecurity from a historical perspective shows that the recommendations from the relevant EU institutions (Parliament, Commission, Council) have been persistent over time, reiterating the same core issues, which appear not to have been resolved after a decade of debates and expert advice. Since at least 2012, EU institutions have identified the two domains under constant critical observation for the deployment of a coordinated European cybersecurity approach: (1) gaps in policies and (2) poor integration (between policies, between Member States and the EU, and between EU agencies).
At first sight, one could jump to the hurried conclusion that, to say the least, things do not move very fast in EU institutions. We believe, however, that this is excellent news. In the last decade, despite the repeatedly pinpointed gaps in policies and lack of integration, European capacity itself has never been organically questioned. Competencies, ideas, technologies, and infrastructures have rightfully been pushed toward continuous improvement, but they have never been described in any recommendation, directive, or expert's view as dangerously deficient. In other words, and that is where the good news lies, although there have been recommendations to enhance education, accelerate technology development, and facilitate the deployment of infrastructures related to cybersecurity, the European fundamentals of cybersecurity have been seen as an asset rather than a liability. If the EU had identified a serious problem of poor assets, our institutions, democracies, private sector, and critical infrastructures would be in an alarming situation, because it takes several years to build capacity, establish a new industry, secure funding, replace physical assets, hire expert staff, and, most importantly, define and implement new curricula and provide appropriate higher education to a generation of students [3,26]. The fact that the core and the bulk of EU cybersecurity-related recommendations point to policy gaps or integration weaknesses highlights a problem that is easier to solve than a lack or fragility of core assets and skills. It also explains why the main EU initiatives under the "cyber diplomacy" [6,14] framework of action(s) are a pillar component of the solution 5 ; rather than relying only on sanctions, cyber diplomacy is a scheme to facilitate dialogue between those stakeholders who are capable of filling the policy and integration gaps for coordinated European cybersecurity and cyber defense [30].
As Josep Borrell, High Representative of the Union for Foreign Affairs and Security Policy, put it: "Beyond strengthening our own cyber resilience, it is in the European DNA to prioritize cooperation and dialogue (...). We want everyone to reap the benefits that the Internet and the use of technologies provide. At the same time, we need effective answers to fast-changing cyber threats. Achieving both objectives will be at the heart of our new EU Cybersecurity Strategy" [4].
The notion of EU cyber diplomacy therefore reflects two important attributes of the EU's cybersecurity approach: (1) the EU aims to act as a global standard setter through strong policies, and (2) this vision is rooted in the trust that the cybersecurity assets (both physical and human) available across Europe, although perfectible, can bear the edifice of coordinated cyber regulations and defense.

Bottom-up: the need for higher standards for human resources
There is an important proviso to the assertions of the previous section, which forces us to temporarily shift our focus away from policies and dig into the uncomfortable realm of technicalities. Although the application of adequate software engineering processes can reduce the probability of exploitable weaknesses and mitigate their severity, even a small mistake by one of the many members of a development chain can go unnoticed for a long time while fully compromising the security of the overall system. Hence, we need to temporarily abandon the top-down approach and delve, bottom-up, into some specifics of software construction to better understand how policy can, and sometimes cannot, help with the aforementioned objective of promoting a global, open, stable, and secure cyberspace for everyone.

Overview of the software construction process from a security perspective
For the rest of our discussion, we use the standard terminology of software engineering as defined by the IEEE Software Engineering Body of Knowledge (SWEBOK) [34]; all quotes in this section are drawn from that source. From a bird's-eye view that will suffice for our purposes, software construction is a process that involves, at least, the successive stages of design, development, operation, and maintenance; together, these will be referred to as the life cycle.
Software engineering processes, which are "concerned with work activities accomplished by software engineers to develop, maintain, and operate software," support the overarching process of software construction. Their utility extends to hard, quantifiable goals such as "to measure and improve the quality of software products in an efficient manner." In particular, security is considered a software quality issue and is one of the few key cross-cutting concerns that shows up in all stages of the software life cycle. As such, software systems should, by design, "prevent unauthorized disclosure, creation, change, deletion, or denial of access to information and other resources" but also "tolerate security-related attacks or violations by limiting damage, continuing service, speeding repair and recovery, and failing and recovering securely." Yet, mitigation of risks is not confined to the design stage as, for example, security concerns during software development "may necessitate one or more software processes to protect the security of the development environment and reduce the risk of malicious acts." For this purpose, a set of quality analysis and evaluation techniques, such as software design reviews and static analysis, as well as a set of practical recommendations for programming languages, tools, and coding practices, complements the specialized process of security testing. The latter, which is carried out during the development, operation, and maintenance phases, verifies "the confidentiality, integrity, and availability of the systems and its data" and that the system cannot be misused or abused by a malicious actor directly or through malware.
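As a minimal illustration of the kind of coding practice that such quality analysis techniques and reviews are meant to enforce, consider query construction. The sketch below is our own hypothetical Python example (the function names, table, and data are invented, not drawn from SWEBOK); it contrasts a pattern that static analysis tools routinely flag with the recommended alternative:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: user input is interpolated directly into the
    # SQL string, so an attacker-controlled value such as
    # "' OR '1'='1" changes the query's meaning (SQL injection).
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Recommended practice: a parameterized query keeps data and code
    # separate; the driver handles the value safely, whatever it contains.
    return conn.execute("SELECT id FROM users WHERE name = ?",
                        (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
    payload = "' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # leaks every row
    print(find_user_safe(conn, payload))    # matches nothing
```

A static analyzer that flags string-built SQL, or a design review that mandates parameterized queries, eliminates this entire vulnerability class before the code ships, which is exactly the sense in which process complements testing.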

Vulnerabilities and their causes
Software engineering processes have been shown to reduce the probability and mitigate the severity of exploitable weaknesses when applied carefully [31]. However, the ubiquity of exploitable vulnerabilities in software, the pervasiveness of automatic updates, and their increasing frequency show how difficult it is to achieve those objectives. For example, the Common Vulnerabilities and Exposures (CVE) database included fewer than 4,600 software vulnerabilities in 2000, while, at the time of writing, the number of vulnerabilities covered is on the order of 200,000, a more than 40-fold increase in two decades.
From a security perspective, and thus focusing on exploitable vulnerabilities rather than on software quality in general, we would like to emphasize, as a root cause, the huge asymmetry between vulnerability creation and detection. Exploitable vulnerabilities have repeatedly been shown to be easy to introduce into a code base, even through a single small modification of the source code, whether inadvertent or malicious, by a single programmer [27] or as a result of misunderstanding security concepts [40]. Although significant improvements have been made, reliable detection has remained elusive, both for human auditors and for automated tools [37].
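To make the asymmetry concrete, here is a hypothetical Python sketch (the helper name and directory are our own invention) of a file-serving routine in which deleting a single small check, an easy change to miss in review, reopens a classic path-traversal vulnerability:

```python
from pathlib import Path

BASE_DIR = Path("/srv/app/uploads")  # hypothetical upload directory

def resolve_upload(filename: str) -> Path:
    """Map a user-supplied filename to a path inside BASE_DIR."""
    candidate = (BASE_DIR / filename).resolve()
    # The two lines below are the entire defense. Deleting them, a tiny
    # and plausible-looking "simplification", would let an input like
    # "../../etc/passwd" resolve to a path outside BASE_DIR.
    if not candidate.is_relative_to(BASE_DIR):
        raise ValueError("path escapes the upload directory")
    return candidate
```

The vulnerable variant differs from the safe one by a two-line deletion; spotting such differences reliably, across millions of lines of first- and third-party code, is precisely what remains elusive for auditors and tools alike.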
As an example, the reader might remember Log4Shell, a critical vulnerability made public in December 2021 that affected millions of servers worldwide [2]. Log4Shell is a vulnerability in Apache Log4j, a logging utility developed and maintained by the Apache Software Foundation and included as a dependency in a wide range of programs, from software development environments to security tools and cloud platforms. It was introduced into the code base of Log4j, unknowingly, by a single contributor while adding a feature requested by its community of users. This is one, but not the only, cautionary tale that illustrates the importance of considering a bottom-up approach to cybersecurity as a necessary complement to the top-down regulatory approach.
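A heavily simplified Python analogy (our own toy model, not Log4j's actual code or API) shows the structure of the flaw: a convenience feature that expands "${...}" lookups inside logged messages turns attacker-controlled log data into instructions for the logger:

```python
import re

# Stand-in table for environment/JNDI-style lookups; the key is invented.
LOOKUPS = {"env:SECRET_TOKEN": "hunter2"}

def naive_log(message: str) -> str:
    # The "feature": expand ${name} tokens found anywhere in the message,
    # including in parts that came straight from an attacker.
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: LOOKUPS.get(m.group(1), m.group(0)),
                  message)

def safer_log(message: str) -> str:
    # Safer behavior: logged data is inert text and is never expanded.
    return message

attacker_controlled = "login failed for user ${env:SECRET_TOKEN}"
```

In Log4Shell the expanded lookup could trigger a JNDI request that loaded remote code; in this toy version it merely leaks a value, but the structural flaw, interpreting data as instructions, is the same, and it arrived as a well-intentioned feature.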

What can (or cannot) policymakers do?
In contrast to physical security, where the scale of the deleterious effects is in general proportional to the severity of the flaw, in the field of cybersecurity a single, small vulnerability in a dependency or an underlying communication layer can be enough to compromise the whole system [33]. When the ubiquitous Internet of Things is added to the picture [32], already blurring the boundaries between physical systems and their digital counterparts, we can foresee a frightening scenario in which the weaker of the two sets the height of the security bar, as a malicious actor can now click to commit an act of terrorism, not merely cyberterrorism [36].
The realms of cybersecurity and traditional security have merged into one, and the AI revolution will only increase the inherent risks. As Recital 51 of the Artificial Intelligence Act [24] puts it, "cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system's vulnerabilities" and "to ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account, as appropriate, the underlying ICT infrastructure." It is well understood that it is not the role of policymakers to tackle software issues directly; that is the responsibility of information security experts. In the last two decades, however, there has been a cultural push toward the idea that coding is easy and that everybody can be a programmer or a network specialist. Although the democratization of programming and inclusive access to information technologies from elementary education onward have been strongly supported by the vast majority of stakeholders and regulators, the importance of the distinction between (trained) amateurs and certified professionals should not be underestimated. The safety of our lives, both digital and physical, depends on it.
The certifications granted by a myriad of companies or specialized training centers to their own students should not be considered a sufficient guarantee, for such self-reference often misses the point of regulated education and peer assessment. Expanding the EU's regulatory framework to cover the accreditation/certification of professional cybersecurity experts would bring the additional benefit of aligning all human resources to the highest shared standards.
In this way, from the bottom up and by raising the standards of human resources, many potential vulnerabilities that would otherwise slip past software engineering processes, evade the scrutiny of human auditors, and fool state-of-the-art automatic detection tools will never enter the code base of the software that supports the operations of critical infrastructure in the EU and the world, cutting off cyber threats, and their physical counterparts, at their roots.

Conclusion
As mentioned, EU institutions aim to become a global standard-setter through clear, sharp policies and regulations, as has been achieved with recent legal texts (see the Digital Markets Act [21], the Digital Services Act [22], the Directive on a high common level of cybersecurity across the Union [23], the Artificial Intelligence Act [24], and the Data Act [25]). The Subsidiarity section of the Digital Markets Act clearly states that the unfair practices of gatekeepers impede start-ups and smaller businesses from offering better, diversified products at more competitive prices [21]; this is particularly relevant here because resilience in the cybersecurity landscape requires diversification of the underlying infrastructure and core assets, diversification as opposed to the acceptance of monopolistic practices, in order to mitigate the inherent risk of a cyber monoculture.
The EU can afford such an ambitious scope precisely because it does not lack the required core assets. But those valuable assets must also be safeguarded from poor operations; capacities, competencies, ideas, technologies, and infrastructures related to cybersecurity require high standards of operation, education, programming, design, and production. Otherwise, we risk trading the current challenges of integration and policies for a harder one: rebuilding core assets, which would take no less than 10 years.