1 Introduction

As we write this article, advancements in artificial intelligence (AI) technologies are accelerating, AI-based products and services are being rapidly adopted, and national efforts are underway to provide safeguards against the negative consequences of AI. The European Commission’s Communication Report defines AI as: “Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals” [17]. The International Data Corporation, a market intelligence firm, estimates that the worldwide AI market will grow at a compound annual growth rate (CAGR) of 18.6% over the 2022–2026 period, reaching 900 billion dollars in 2026 [22].

Beyond its potential, AI is also a prominent example of a technology for which cyber risks are becoming an alarming threat [51]. As adversarial actors actively acquire knowledge and skills to enhance the efficacy of their attacks, AI technology is becoming a focal point of attack due to its ever-increasing economic and social significance. Whilst AI systems are susceptible to attacks commonly encountered by traditional software, they are also vulnerable to specific attacks that exploit their unique architectures, based on knowledge of how such AI models operate. Furthermore, in AI systems, data can be weaponised in novel ways, necessitating changes in data collection, storage, and usage practices [8].

In response to such cyber threats, the European Union Agency for Cybersecurity (ENISA) has recently released a report that delineates the prevailing cybersecurity and privacy threats, as well as vulnerabilities inherent in AI use cases [14]. The analysis primarily concentrates on the identification of threats and vulnerabilities associated with machine learning techniques, while also considering broader aspects of AI systems. The field of AI presents several unresolved challenges that necessitate further research, including attaining verifiability, reliability, explainability, auditability, robustness, and unbiasedness in AI systems.

Additionally, the quality of datasets emerges as a critical concern, as: (a) the maxim “garbage in/garbage out” highlights the requirement for high-quality inputs to yield satisfactory outputs; and (b) unwanted biases could emerge due to unbalanced datasets (Footnote 1). These issues are listed as open research questions by ENISA, alongside the need for designing more attack-resilient AI systems. The regulatory concern over AI cyber risks was also noted in 2020, with the release of the document on the EU’s Cybersecurity Strategy for the Digital Decade [15], which maintains: “Cybersecurity must be integrated into all these digital investments, particularly key technologies like Artificial Intelligence (AI), encryption and quantum computing, using incentives, obligations and benchmarks”. The need for improved cybersecurity measures in AI systems extends beyond the European Union. The Center for Security and Emerging Technologies in the United States has also underscored the urgency for policymakers to swiftly and efficiently address potential avenues for reducing cyber vulnerabilities in the realm of AI [29].

The proposed AI Act by the European Union seeks to establish a comprehensive regulatory framework for AI systems, with a primary focus on addressing ethical and legal considerations, yet it also recognises and emphasises the significance of cybersecurity within AI systems (Footnote 2). During the same period in which the AI Act was being discussed and developed, blockchain was posing similar techno-regulatory concerns around the world, particularly due to its use in cryptocurrencies, for which technology-focused regulation was proposed and, eventually, the EU’s Markets in Crypto-Assets (MiCA) Regulation was passed [12]. While aspects of blockchain and other distributed ledger technologies (DLT), particularly their decentralised nature and immutability, have posed challenges to regulators, we see their potential to fill compliance and risk gaps that the AI Act leaves. We herein suggest how blockchain affordances can be used to mitigate certain AI-related cyber issues, increasing the overall security of AI-based systems. In this article we examine how blockchain and DLT can enhance compliance with the EU AI Act and further reinforce cybersecurity measures.

In a complex ecosystem such as that of AI and cybersecurity, academic literature has generally focused on either the technical or the purely legal aspects, creating an interdisciplinary gap that requires further attention. On the technical side, considerable attention has been devoted to exploring the diverse range of cybersecurity challenges associated with AI models [1, 25,26,27]. A plethora of studies have delved into the technical aspects and vulnerabilities that arise in AI systems [33, 50]. Studies have investigated various dimensions of AI security, aiming to identify potential attack vectors and develop effective defence mechanisms. It is worth noting that this field, like the development of the technology itself, is highly dynamic and continuously evolving. As attack techniques become increasingly complex and sophisticated, there is a need for ongoing research to uncover new vulnerabilities and develop robust countermeasures.

On the regulatory side, several studies explore the connection between AI and cybersecurity. For example, a study by Andraško et al. (2021) analyses the regulatory intersections between AI, data protection and cybersecurity within the EU legal framework [4]. Biasin and Kamenjasevié (2022) [6] examine cybersecurity issues of medical devices from a regulatory standpoint, underlining novel challenges arising from the AI Act and NIS 2 Directive proposals. Comiter (2019) [8] has highlighted the disconnect between cyber policy and AI systems. The author asserts that effectively addressing cyber issues associated with AI necessitates novel approaches and solutions that should be explicitly incorporated into relevant regulations. In a similar direction, a study by Mueck et al. (2023) [34] examines the upcoming European Regulations on Artificial Intelligence and Cybersecurity and provides an overview of the status of policy actions related to cyber regulation in AI. Ellul (2022) [12] argues that regulation should not be AI-specific but focused on software used in specific sectors and activities, and later, Ellul et al. (2023) [13] propose the need for techno-regulatory solutions to support software (and AI) related regulation.

When it comes to the intersection between blockchain and AI, Shinde et al. (2021) [43] have performed a bibliometric and literature analysis of how blockchain provides a security blanket to AI-based systems. Likewise, Mamoshina et al. (2017) [31] review emerging blockchain applications specifically targeting the AI area. They also identify and discuss open research challenges of utilising blockchain technologies for AI, and propose converging blockchain and next-generation AI technologies as a way to decentralise and accelerate biomedical research and healthcare. Short et al. (2020) [44] examine how blockchain-based technologies can be used to improve security in federated learning systems.

To the best of our knowledge, there is a lack of research examining whether blockchain could serve as a tool for achieving compliance with legal AI cybersecurity requirements. In line with Ellul et al. (2023) [13], who maintain that the problem of technology regulation can also be addressed through the use of technology itself, in the following sections we aim to examine how blockchain can be used to mitigate certain cybersecurity risks and attacks related to high-risk AI systems, and to what extent these measures meet some of the cyber requirements set out in the AI Act.

More specifically, we propose that blockchain can (a) mitigate certain cyber attacks, such as data poisoning in trained AI models and datasets. Likewise, by employing decentralised infrastructure and blockchain technology, (b) AI systems can benefit from cryptographically-secured guardrails, reducing the likelihood of misuse or exploitation for adversarial purposes. Furthermore, we explore (c) how developers can restrict AI’s access to critical infrastructure through tamper-proof decentralised infrastructure such as blockchains and smart contracts. Additionally, we examine (d) how blockchain can enable secure and transparent data sharing mechanisms through decentralised storage, augmenting data integrity and immutability in AI systems. Furthermore, we analyse (e) how blockchain facilitates independent audits and verification of AI systems, ensuring their intended functionality and mitigating concerns related to bias and malicious behaviour.

By leveraging blockchain technology, AI systems can align with some of the requirements mandated in the AI Act, specifically in terms of data, data governance, record-keeping, transparency and access control. Blockchain’s decentralised and tamper-proof nature helps address some of these requirements, providing a potential foundation for accountable and trustworthy AI systems. This article thus sheds light on the potential of blockchain technology in fortifying high-risk AI systems against cyber risks, contributing to the advancement of secure and trustworthy AI deployments (both in the EU and beyond) and guiding policy makers in their decisions concerning AI cyber risk management.

The rest of the paper is organised as follows: in Sect. 2, we provide a general overview of the cybersecurity risks in AI systems, emphasising attack vectors relevant for our analysis. In Sect. 3, we touch upon the AI Act and cybersecurity. In Sect. 4, we analyse the application of blockchain as a cybersecurity tool in mitigating certain cyber risks of AI, in parallel with some of the requirements of the AI Act. In Sect. 5, we discuss technology as a complementary tool for achieving legal compliance, before noting limitations in Sect. 6 and concluding in Sect. 7.

2 AI: security vulnerabilities and attack vectors

Under the hood, AI systems typically make use of machine learning, logic-based reasoning, knowledge-driven approaches, target-driven optimisation (given some fitness function), or some other form of statistical technique. Indeed, the definition of AI has been debated for decades, and it is not the intention of this paper to add to this debate, nor to support a particular definition of AI or a view on what should or should not be classified as AI. Yet, we discuss solutions that blockchain can offer for many types of AI systems (and potentially all systems, depending upon one’s definition of AI).

Many such AI systems have the capability to operate within the realm of human-defined objectives, generating a spectrum of outputs that exert profound influence over the environments they interact with; for example, consider AI algorithms used to moderate, filter, and promote different content, which can sway the public narrative. Through their intrinsic computational prowess, AI systems can manifest as tools for generating high-quality content, making accurate predictions, offering personalised recommendations, and rendering impactful decisions. If done right, these outputs possess the potential to reshape industries, optimise processes across a broad spectrum of domains and affect the fabric of society [45].

Upon collecting information, AI system engineers need to build into such systems a process of interpretation, potentially leveraging vast knowledge repositories to extract meaning, identify patterns, and draw insights from past data and/or the data at hand. Armed with this synthesised understanding, such systems are used to perform intricate reasoning, contemplating a multitude of factors, associations, and dependencies to arrive at informed decisions. By integrating logical frameworks, probabilistic reasoning, and pattern recognition techniques, AI systems possess the aptitude to unravel complex problems, devise innovative strategies, and chart a course of action tailored to achieving their prescribed goals [10, 30].

However, AI systems are not impervious to vulnerabilities or weak points, as they can be targeted by various means, including attacks that exploit their inherent architecture, limitations, or weaknesses [26]. These attacks can encompass a wide range of techniques targeting underlying algorithms and data inputs, and may even involve exploiting physical components connected to AI systems. The susceptibility of AI systems particularly arises from their complex and interconnected nature, which creates many opportunities for adversaries to exploit potential weaknesses in their design, implementation, or deployment. In certain situations, AI systems may need specific cybersecurity defence and protection mechanisms to combat adversaries [26]. While one cannot ensure a fully secure AI system [51], in the following sections we take a close look at some prevalent cybersecurity risks concerning AI systems and how they can be mitigated with the help of blockchain technology.

2.1 AI attack vectors: data and humans

This article does not aim to provide a comprehensive overview of all AI cyber attacks, as this is a complex and extensive topic that warrants volumes of literature. Instead, we focus on specific vulnerabilities and threats for which blockchain can be a useful tool. In particular, we discuss data and human factors as potential attack vectors that can be exploited to target AI systems. The explanations provided are not exhaustive but serve as illustrative examples to support the reader’s understanding of the second part of the article.

2.1.1 Data-focused attacks

Input attacks involve manipulating the inputs fed into an AI system with the aim of altering the system’s output to achieve the attacker’s desired outcome [8]. Since AI systems function like ‘machines’ that take input, perform computations, and generate output, manipulating the input can enable attackers to influence the system’s output. The importance of data throughout the lifecycle of such systems cannot be overstated: from the building and validation of such systems to their live operation, data is at the core of the learning process of machine learning models. One of the most prevalent input attack vectors involves poisoning (i.e. manipulating) the data utilised to train such models [2, 46]. Data poisoning attacks are a major concern in AI cybersecurity as they can cause substantial damage leading to undesirable socio-economic consequences. Consider a scenario where a public sector AI system is used to calculate the level of social assistance that should be given to (poor) families. An attacker could then poison the data so that the system delivers a result that particular types of families are not entitled to support.

Likewise, consider an attack scenario where the attacker has gained access to the training data and is able to manipulate it, for example by introducing incorrect labels or biased information. This attack leverages the sensitivity of machine learning models to the quality and integrity of training data. If the attacker can inject poisoned data that influences the model’s learning process, they can alter its decision boundaries and compromise its performance [52]. Data poisoning attacks can occur at different stages, including during data collection, processing, or labelling. Adversaries may use various techniques, such as injecting biased samples, modifying existing data points, or even tampering with data within the training pipeline itself [37]. Arguably, data is the “water, food and air of AI”; therefore, by poisoning the data, one can attack the whole (or most) of an AI system [8].
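To make the mechanism concrete, the following sketch (our illustration, not drawn from the cited works) shows a training-data injection attack against a deliberately simple nearest-centroid classifier; all data, labels and proportions are hypothetical.

```python
# Illustrative sketch (our example): a training-data injection attack against a
# deliberately simple nearest-centroid classifier. All data, labels and
# proportions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated classes (e.g. "benign" = 0 and "fraudulent" = 1 transactions).
X0 = rng.normal(loc=-2.0, scale=1.0, size=(200, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(200, 2))
X_clean = np.vstack([X0, X1])
y_clean = np.array([0] * 200 + [1] * 200)

def train_centroids(X, y):
    """'Training' here is simply computing one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    """Assign each sample to its nearest class centroid."""
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes], axis=1)
    return np.array(classes)[dists.argmin(axis=1)]

def accuracy(model, X, y):
    return float((predict(model, X) == y).mean())

clean_model = train_centroids(X_clean, y_clean)

# Poisoning step: the attacker injects samples deep inside the class-0 region but
# labelled as class 1, dragging the learned class-1 centroid away from genuine data.
X_poison = rng.normal(loc=-5.0, scale=0.5, size=(300, 2))
X_train = np.vstack([X_clean, X_poison])
y_train = np.concatenate([y_clean, np.ones(300, dtype=int)])
poisoned_model = train_centroids(X_train, y_train)

# Evaluation on the clean data shows how the poisoned model's decisions degrade.
print("accuracy of model trained on clean data:   ", accuracy(clean_model, X_clean, y_clean))
print("accuracy of model trained on poisoned data:", accuracy(poisoned_model, X_clean, y_clean))
```

Even this toy example illustrates the core point: the attacker never touches the learning code, only the data, yet the resulting model misbehaves on perfectly legitimate inputs.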

Another, similar form of attack targets deep neural networks (Footnote 3). Here, the attacker introduces subtle modifications to inputs in an attempt to manipulate the AI system’s predictions. For example, attacks such as projected gradient descent (PGD) and the square attack exploit the model’s sensitivity to small, carefully crafted perturbations in the input data, causing the deep neural network to produce incorrect predictions [49].
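As a hedged illustration of the idea (using an assumed toy logistic-regression “model” rather than a deep neural network, and hypothetical weights and inputs), the following sketch shows how a PGD-style attack iteratively perturbs an input within a small epsilon-ball until the prediction flips.

```python
# Hedged illustration of a PGD-style adversarial perturbation. A toy logistic-regression
# "model" with assumed, fixed weights stands in for a deep neural network; the attack
# logic (signed gradient steps projected onto an epsilon-ball) is the same in spirit.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # hypothetical "trained" weights, known to the attacker
b = 0.1

def predict_proba(x):
    """Probability of class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def loss_grad_wrt_input(x, y_true):
    """Gradient of the binary cross-entropy loss with respect to the input."""
    return (predict_proba(x) - y_true) * w

def pgd_attack(x0, y_true, eps=0.3, alpha=0.05, steps=40):
    """Ascend the loss gradient, projecting back into the eps-ball around x0 each step."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(loss_grad_wrt_input(x, y_true))  # signed gradient step
        x = np.clip(x, x0 - eps, x0 + eps)                       # projection step
    return x

x0 = np.array([0.4, -0.1, 0.2])   # benign input, confidently classified as class 1
x_adv = pgd_attack(x0, y_true=1)

print("clean prediction:      ", round(predict_proba(x0), 3))    # ~0.73 -> class 1
print("adversarial prediction:", round(predict_proba(x_adv), 3)) # ~0.45 -> class 0
print("max perturbation:      ", np.abs(x_adv - x0).max())       # bounded by eps
```

The perturbation stays within a small bound on every input feature, which is precisely why such adversarial examples are hard for humans or simple monitoring to notice.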

As noted, data alterations can be carefully designed to deceive the system, causing it to produce incorrect or biased results. Such attacks can be challenging to detect, especially if the modifications are crafted to evade detection mechanisms or to maintain normal system functioning in non-attack scenarios. A non-exhaustive list of data-focused attacks is presented in Table 1 below.

Table 1 Description of types of data-focused attack

2.1.2 Human-focused attacks

Attackers may attempt to manipulate or deceive individuals with access to the system, such as administrators or users, into revealing sensitive information, sharing credentials, or performing actions that compromise the system’s security. Likewise, developers play a key role in building, maintaining, and securing AI systems. Developers typically have privileged access to underlying code, infrastructure, datasets, and configuration settings of AI systems. They possess the technical knowledge and expertise required to modify, update, and maintain such systems. However, their access also presents a potential vulnerability that can be exploited by malicious actors through various means including social engineering.

Consider a code alteration type of attack, where a malicious party gains access and modifies the code of an AI system (which may include model parameters) in order to manipulate its behaviour or achieve malicious objectives (Footnote 4). While this could also be said of other types of systems, one of the main differences between traditional systems and AI-based systems is that such changes may result in system behaviour that still appears to be correct. Moreover, code alterations in high-risk AI systems can have detrimental consequences for users and society in general. For example, consider an autonomous driving system that relies on computer vision algorithms to detect traffic signs. In a code alteration attack, an attacker could modify the source code responsible for sign recognition to deliberately misclassify stop signs as yield signs. This alteration could lead to potentially dangerous situations on the road, as the autonomous vehicle may not respond correctly to the altered signs.

This brings to light the importance of access control protection for developers and other important stakeholders as an essential security measure. Isaac et al. [24] maintain that if developers’ access is not properly protected, attackers may gain unauthorised access to their accounts or exploit their privileges to modify the code, inject malicious components, or introduce vulnerabilities in the AI system. Moreover, developers often have access to sensitive data used in AI systems. Inadequate access controls can expose this data to unauthorised access or increase the risk of data theft, leading to breaches of confidentiality and potential harm to individuals or organisations.

3 The AI Act and cybersecurity

Following the European Commission’s release of its long-awaited proposal for an AI regulatory framework in April 2021 [16], there has been notable progress among EU institutions and lawmakers in establishing the EU Artificial Intelligence Act (hereafter: AI Act). The AI Act aims to fulfil the commitment of EU institutions to present a unified European regulatory framework addressing the ethical and societal dimensions of AI. Once enacted, the AI Act will have binding effects on all 27 EU Member States, marking a significant milestone in the regulation of AI at the European level (Footnote 5).

While the AI Act primarily focuses on ethical and legal aspects of AI, it also addresses the importance of cybersecurity in AI systems. In this regard, the AI Act emphasises the need for AI systems to be designed and developed with cybersecurity in mind (Footnote 6). It requires that AI systems incorporate appropriate technical and organisational measures to ensure their security and resilience against cyber threats. For example, the AI Act mandates that AI developers and deployers conduct thorough risk assessments to identify potential cybersecurity risks associated with their systems [40]. Based on the risk assessment findings, organisations are required to implement appropriate mitigation measures to reduce the identified risks and enhance the cybersecurity posture of the AI system.

Furthermore, the AI Act recognises the importance of data security in AI systems. It requires that personal and sensitive data used by AI systems be adequately protected against unauthorised access, disclosure, alteration, and destruction. The Act also promotes the use of privacy-enhancing technologies to safeguard data privacy and confidentiality. It further emphasises the importance of transparency and explainability in AI systems, which includes cybersecurity aspects, requiring that AI systems be designed in a way that allows auditors and regulators to assess the system’s security measures, including cybersecurity controls, to ensure compliance with regulatory requirements. In the event of a cybersecurity incident or breach involving an AI system, the AI Act requires incident reporting to the relevant authorities (Footnote 7). It also encourages cooperation and information sharing among stakeholders to address and mitigate cybersecurity risks collectively (Footnote 8). The AI Act introduces a voluntary AI conformity assessment framework, which may include cybersecurity criteria. The framework allows AI systems to obtain certification to demonstrate their compliance with the Act’s requirements, including cybersecurity measures (Footnote 9). The AI Act designates supervisory authorities responsible for overseeing compliance with the Act’s provisions, including cybersecurity requirements. These authorities will have the power to audit, assess, and enforce compliance with the measures outlined in the Act; aspects of this approach have similarities to what was proposed by the Malta Digital Innovation Authority [13].

The AI Act categorises AI systems into four risk levels: unacceptable risk (Footnote 10), high risk, limited risk, and minimal risk. Each category is subject to specific regulatory requirements, determined by the potential harm the systems may cause to individuals and society. The proposal clarifies the scope of high-risk systems by adding a set of requirements. AI systems listed in Annex III of the AI Act shall be considered high-risk if they pose a “significant risk” to an individual’s health, safety, or fundamental rights. For example, high-risk AI systems listed in Annex III include those used for biometrics; management of critical infrastructure; educational and vocational training; employment, workers’ management and access to self-employment tools; access to essential public and private services (such as life and health insurance); law enforcement; migration, asylum and border control management tools; and the administration of justice and democratic processes [18]. With the goal of diminishing risks and cutting down expenses related to risk reduction strategies, our focus in this article centres primarily on the high-risk category. This specific category not only holds significance but also offers an avenue for leveraging supplementary measures, such as blockchain-based tools.

It is worth noting that the AI Act and the NIS 2 (Network and Information Systems) Directive share significant commonalities in terms of cybersecurity requirements. Both the AI Act and the NIS 2 Directive adopt a risk management approach to cybersecurity. They emphasise the importance of identifying and assessing risks associated with AI systems and critical information infrastructure, respectively. Furthermore, both frameworks impose obligations on relevant stakeholders to ensure the security of their systems. The AI Act requires AI developers and deployers to incorporate appropriate technical and organisational measures to ensure the security and resilience of their AI systems. Similarly, the NIS 2 Directive mandates operators of essential services and digital service providers to implement robust cybersecurity measures to protect critical infrastructure. Likewise, both frameworks designate supervisory authorities responsible for overseeing compliance with their cybersecurity provisions. These authorities have the power to audit, assess, and enforce compliance with the requirements outlined in the AI Act and the NIS 2 Directive. Among other things, their role is to ensure that relevant stakeholders adhere to robust cybersecurity practices and measures. In addition, ENISA recently released a report providing an overview of standards (existing, being drafted, under consideration and planned) related to the cybersecurity of AI, assessing their coverage and identifying gaps in standardisation [9]. The report examines the role of cybersecurity within a set of requirements outlined by the AI Act, such as data, data governance, record keeping, risk management, etc.

Overall, the AI Act recognises the significance of cybersecurity in AI systems and establishes measures to ensure their resilience against cyber threats. By incorporating cybersecurity requirements, risk assessment and mitigation, data security, transparency, incident reporting, and compliance mechanisms, the AI Act aims to promote the safe and secure deployment of AI technologies in the European Union.

4 Blockchain for AI: a tool to achieve compliance with cyber and data security requirements under the EU AI Act

4.1 Data integrity and immutability

Data integrity and immutability are critical aspects of ensuring the reliability, security and trustworthiness of AI systems. The AI Act highlights the significance of employing high-quality training data, unbiased datasets, and ensuring that AI systems are not trained on discriminatory or illegal data. The Act states that data quality should be reinforced by the use of tools that verify the source of data and the integrity of data (i.e. to prove that data has not been manipulated). It also underlines that access to data should be limited to those specifically positioned to access it. Article 15 of the AI Act calls for the implementation of “technical solutions to address AI specific vulnerabilities including, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws”.

Blockchain technology offers a robust solution to address these concerns by providing a decentralised and tamper-resistant ledger for securely transferring, storing and verifying data [53]. It must be noted that storing data in a public blockchain implies that the data would be available for anyone to see; however, various techniques may be adopted to both: (i) ensure data is kept private (and not directly stored on a public blockchain); and (ii) ensure data integrity can be upheld (through storing cryptographic hashes of data on a blockchain).

Blockchain’s immutability feature mitigates these risks by creating a permanent record of data transactions that cannot be altered or tampered with. When data is recorded on the blockchain, it is stored across all nodes in the network, forming a decentralised and synchronised ledger. New data, such as the addition or modification of training data, is cryptographically linked to previous transactions, creating a chain of blocks that is resistant to modification [38]. This ensures that once data is added to a blockchain, it becomes impossible or infeasible to alter or manipulate without the consensus of the network participants. Any attempts to tamper with the data would require significant computational power and/or consensus among the majority of network participants, making it economically and practically infeasible. Furthermore, applications digitally sign data transmitted to a blockchain, and therefore it would be possible for an application to verify whether any data the application itself has submitted has since been manipulated.
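A minimal sketch of this hash-anchoring workflow follows (our illustration, with hypothetical dataset contents and identifiers; in practice the digest would be written to a blockchain transaction rather than a local record):

```python
# Minimal sketch (assumed workflow): anchoring a dataset's cryptographic hash so that
# tampering can later be detected. A plain dictionary stands in for the on-chain record;
# in practice the digest would be written to a blockchain transaction.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# 1. At training time: hash the training set and "anchor" only the digest (not the raw data).
training_data = b'{"samples": [[0.1, 0.2, "cat"], [0.9, 0.8, "dog"]]}'
on_chain_record = {
    "dataset_id": "train-v1",          # hypothetical identifier
    "sha256": digest(training_data),
}

# 2. At audit time: recompute the digest of whatever data is presented and compare.
def verify(presented: bytes, record: dict) -> bool:
    return digest(presented) == record["sha256"]

tampered_data = training_data.replace(b'"dog"', b'"cat"')   # a single poisoned label
print("original data verifies:", verify(training_data, on_chain_record))   # True
print("tampered data verifies:", verify(tampered_data, on_chain_record))   # False
```

Because only the digest is anchored, the raw training data can remain private and off-chain while its integrity remains publicly verifiable.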

These features of immutability and verifiability can further help applications comply with the AI Act’s proposition regarding incorporating ‘logs’ in AI-based systems. The EU emphasises the need for high-risk AI systems to be designed and developed with capabilities enabling the automatic recording of events (logs) during the operation of such systems (Footnote 11).

By leveraging blockchain for data integrity, AI systems can maintain a reliable and verifiable record of the training data used; indeed, as discussed, consideration would need to be given to the type of blockchain used (public/permissioned/hybrid) and the extent to which data is stored on the blockchain (e.g., raw data on-chain, or cryptographic hashes on-chain with off-chain raw data, or some other suitable configuration). To further emphasise the point, blockchains can record data provenance, the date and time of recording, and other characteristics. This can also enable transparency and trust in data sources and provide a means to verify that AI models are trained on accurate and untampered data. As discussed, the characteristics of blockchain technology align with several requirements outlined in the AI Act, specifically in relation to data and data governance, record-keeping, transparency, and the provision of information to users.

It is important to note that while blockchains ensure data integrity and immutability, they do not guarantee the quality or accuracy of the data itself. Blockchain technology can provide assurances that the data has not been tampered with, but it does not address the issue of data bias, incompleteness, or representativeness. Ensuring the quality and reliability of the data used for training AI systems remains a separate challenge that requires additional research.

4.2 Data sharing

According to the AI Act: “European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and nondiscriminatory access to high quality data for the training, validation and testing of AI systems” (Footnote 12). Moreover, the AI Act maintains that, in order to facilitate the development of high-risk AI systems, specific actors, including digital innovation hubs, testing and experimentation facilities, researchers, experts, etc., should have access to and utilise high-quality datasets within their relevant fields of activities, as per the guidelines set by the Regulation. Relatedly, secure data sharing and storage become critical concerns when it comes to collaborative AI training systems involving multiple parties.

Blockchain technology can provide solutions that enable secure data sharing among parties, facilitating collaboration while maintaining data privacy to a certain extent. Although still developing, the field of privacy-preserving blockchain solutions is on the rise. Bernabe et al. [5] discuss novel privacy-preserving solutions for blockchains, where users can remain anonymous and take control of their personal data following a self-sovereign identity (SSI) model. Moreover, the Dusk network leverages zero-knowledge technologies to allow transactions on the blockchain to benefit from confidentiality [11]. In other words, the network acts like a public record with smart contracts that store relevant information in a confidential fashion, thus addressing the shortcomings of similar platforms, such as Ethereum. Furthermore, Galal and Youssef [21] build a publicly verifiable and secrecy-preserving blockchain-based auction protocol to address privacy concerns.

Blockchain, along with secure multiparty computation (MPC) techniques, can be used to allow multiple entities to collectively train AI models while keeping their individual data private, whilst at the same time providing guarantees with respect to the future verifiability of the data such models were trained on. MPC enables computation on encrypted or secret-shared data, ensuring that no participant gains access to another party’s sensitive information [7, 28]. In this case, the blockchain serves as a trusted intermediary that orchestrates the computation and provides guarantees with respect to the integrity of the training process.
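To convey the underlying intuition, the following sketch (our illustration, using simple additive secret sharing with an assumed field modulus and hypothetical party values) shows how an aggregate can be computed without any party revealing its private input; real MPC protocols used for collaborative training are considerably more involved.

```python
# Illustrative sketch of the MPC idea using additive secret sharing over a prime field:
# three hypothetical parties contribute private values (e.g. local update components)
# and only the aggregate is ever revealed. Real MPC protocols are far more involved.
import random

PRIME = 2_147_483_647   # assumed field modulus; real deployments use larger parameters

def share(secret: int, n_parties: int) -> list:
    """Split a secret into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

private_values = [12, 7, 30]                        # each value known only to its owner
all_shares = [share(v, 3) for v in private_values]  # each party sends one share to every peer

# Each party locally sums the shares it received (one "column")...
partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]

# ...and only these partial sums are published; their total reveals the aggregate,
# never any individual input.
aggregate = sum(partial_sums) % PRIME
print("aggregate of the private values:", aggregate)   # 49
```

In a blockchain-backed setting, the published partial sums (or commitments to them) could be anchored on-chain, giving the later verifiability of the training process referred to above.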

Likewise, through the use of smart contracts, the rules and protocols for data sharing and collaborative training can be defined and enforced on the blockchain [35]. Smart contracts could be used to specify the conditions under which data can be accessed, processed, and shared among the participating entities, although it is important to note that access to such data itself needs to be controlled by a centralised component (since all data on a public blockchain is publicly available). This can help ensure that data sharing occurs in a controlled and auditable manner, promoting transparency and trust among participants. By leveraging blockchain for auditable data sharing, participants can retain ownership and more control over their data (stored off-chain) while still being able to benefit from the collective intelligence and insight gained through collaborative AI training.
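To illustrate the kind of rules such a contract might encode, the following schematic sketch (written in Python purely for illustration; an actual deployment would use a smart contract language and on-chain state, and all names are hypothetical) models grant, revoke and access-check operations together with an append-only event log:

```python
# Schematic sketch of the access rules a data-sharing smart contract could encode,
# written in Python purely for illustration (an actual deployment would use a smart
# contract language and on-chain storage; all names here are hypothetical).
from dataclasses import dataclass, field
from time import time

@dataclass
class DataSharingPolicy:
    owner: str
    permissions: dict = field(default_factory=dict)   # dataset_id -> set of authorised parties
    events: list = field(default_factory=list)        # append-only "on-chain" event log

    def _log(self, event: str, **details) -> None:
        self.events.append({"event": event, "timestamp": time(), **details})

    def grant(self, caller: str, dataset_id: str, grantee: str) -> None:
        assert caller == self.owner, "only the data owner may grant access"
        self.permissions.setdefault(dataset_id, set()).add(grantee)
        self._log("grant", dataset_id=dataset_id, grantee=grantee)

    def revoke(self, caller: str, dataset_id: str, grantee: str) -> None:
        assert caller == self.owner, "only the data owner may revoke access"
        self.permissions.get(dataset_id, set()).discard(grantee)
        self._log("revoke", dataset_id=dataset_id, grantee=grantee)

    def can_access(self, party: str, dataset_id: str) -> bool:
        allowed = party in self.permissions.get(dataset_id, set())
        self._log("access_check", dataset_id=dataset_id, party=party, allowed=allowed)
        return allowed

policy = DataSharingPolicy(owner="hospital-A")
policy.grant("hospital-A", "ct-scans-2023", "research-lab-B")
print(policy.can_access("research-lab-B", "ct-scans-2023"))   # True
print(policy.can_access("unknown-party", "ct-scans-2023"))    # False
print(len(policy.events), "events recorded for later audit")
```

The essential design point is that every grant, revocation and access check leaves a record, which is what makes the arrangement auditable by all participants.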

The InterPlanetary File System (IPFS) is a distributed file system that provides a decentralised approach to storing and sharing files across a network [23]. It enables secure and efficient content addressing, making data retrieval resilient to censorship and data corruption, albeit in a public manner (i.e. all data is publicly available). IPFS uses content-addressable storage, which ensures that files are uniquely identified by their content rather than their location, thus enabling tamper-resistant data sharing, since any change in content would result in a different file address (the address and the content are intimately linked).
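The principle can be conveyed with a toy sketch (our illustration; real IPFS uses chunking and multihash-encoded CIDs, whereas a bare SHA-256 digest stands in here):

```python
# Toy sketch of content addressing in the spirit of IPFS. Real IPFS uses chunking and
# multihash-encoded CIDs; a bare SHA-256 digest stands in here to convey the principle
# that data is stored and retrieved by the hash of its content.
import hashlib

class ContentAddressedStore:
    def __init__(self):
        self._blocks = {}

    def put(self, content: bytes) -> str:
        """Store content under the address derived from the content itself."""
        address = hashlib.sha256(content).hexdigest()
        self._blocks[address] = content
        return address

    def get(self, address: str) -> bytes:
        """Retrieve content and verify it still matches its address."""
        content = self._blocks[address]
        if hashlib.sha256(content).hexdigest() != address:
            raise ValueError("content does not match its address (corrupted or tampered)")
        return content

store = ContentAddressedStore()
address = store.put(b"model card for a high-risk AI system, v1.0")
print(address)             # any change to the content would yield a different address
print(store.get(address))  # retrieval verifies integrity against the address
```

Because the address is bound to the content, a node serving altered data is detected immediately upon retrieval, which is what makes the sharing tamper-resistant.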

Overall, decentralised data sharing aligns with the principles and objectives outlined in the AI Act by promoting transparency and accountability. The AI Act places importance on data protection and security. Decentralised data sharing can enhance tamper-resistance by utilising cryptographic techniques, access controls, and distributed storage mechanisms. By distributing data across a network of nodes, decentralised systems reduce the risk of a single point of failure. Moreover, the AI Act emphasises the rights of individuals regarding their data and the necessity of obtaining explicit user consent. Decentralised data sharing aligns with these principles by giving users greater control over their data. Through decentralised technologies like blockchain, users can directly manage and grant access to their data, ensuring that their consent is obtained and that they have a say in how their data is used; nevertheless, the actual storage providers (whether centralised or decentralised) must still be trusted to release data only when such blockchain-based access control policies are followed.

Furthermore, the AI Act emphasises the ethical implications of AI systems, including fairness, accountability, and non-discrimination. Decentralised data sharing can support these ethical considerations by enabling collective decision-making, facilitating consensus, and supporting transparent governance models [36]. These features promote fairness and accountability, and can help prevent discriminatory practices in data sharing and AI system development, since the actual development and learning processes become more open and democratised. Likewise, the AI Act promotes interoperability and data portability to foster competition and innovation. Decentralised data sharing can facilitate interoperability by enabling different AI systems to access and utilise data from various sources in a standardised and tamper-proof manner. It may also facilitate data portability, as users can easily share their data across different platforms or services without being locked into a specific provider’s ecosystem, provided that standardised interfaces or means of connecting such different systems/data models are made available.

4.3 Auditing and accountability

Auditing and accountability are crucial aspects of ensuring the responsible and ethical deployment of AI systems [32]. Many of today’s AI systems are closed-source. Without access to the code and algorithmic details, it becomes difficult, if not infeasible, to identify whether biases exist within models. Moreover, without access to source code, external entities such as experts, auditors, or regulatory bodies face challenges in conducting thorough audits or assessments of a system’s fairness, bias, or potential vulnerabilities. Likewise, code alterations and data poisoning attacks may be harder to detect in closed systems.

The AI Act sets out obligations for ex ante testing, risk management and human oversight to minimise the risk of erroneous or biased AI-assisted decisions in critical areas such as education and training, employment, important services, law enforcement, and the judiciary (Footnote 13). The proposed regulation places great importance on both auditing and transparency. For example, under Annex 7 the document states that: “the body shall carry out periodic audits to make sure that the provider maintains and applies the quality management system and shall provide the provider with an audit report. In the context of those audits, the notified body may carry out additional tests of the AI systems for which an EU technical documentation assessment certificate was issued.”

The AI Act specifies that for high-risk AI systems, the design should prioritise sufficient transparency to enable users to interpret the system’s output and utilise it appropriately. As noted, it is essential to establish an appropriate level and form of transparency to ensure compliance with respective obligations.

Relatedly, ENISA acknowledges the existing techno-legal gap concerning transparency in AI systems and its importance for security. For example, it maintains that: “The traceability and lineage of both data and AI components are not fully addressed. The traceability of processes is addressed by several standards related to quality. In that regard, ISO 9001 is the cornerstone of quality management. However, the traceability of data and AI components throughout their life cycles remains an issue that cuts across most threats and remains largely unaddressed”. The document emphasises that documentation in itself is not a security requirement, and that, for a security control, technical documentation is needed to ensure system transparency.

Blockchain technology offers unique features that can enhance both the transparency and auditability of AI systems, enabling stakeholders to hold them accountable for their actions. One of the key advantages of blockchain is its inherent transparency. By recording the entire lifecycle of an AI model on the blockchain (or proof of the lifecycle to minimise on-chain data), including the data sources used for training, the algorithms employed, and any subsequent updates or modifications, a verifiable trail is established. This comprehensive record enables auditors and regulators to trace the decision-making process of the AI system, ensuring that it adheres to ethical standards, legal requirements, and established guidelines. The transparency of blockchain-based audit trails can help identify potential biases in AI systems. Biases can arise from various sources, including biased training data or discriminatory algorithmic design. With blockchain, relevant stakeholders, including auditors, can examine the inputs, processes, and outputs of an AI system and detect any potential biases or discriminatory patterns. This visibility fosters accountability and allows for necessary interventions to mitigate biases and ensure fair and equitable outcomes. Furthermore, blockchain’s immutability ensures the integrity and tamper-resistance of the audit trail. Once recorded on the blockchain, the information becomes practically unalterable, preventing unauthorised modifications or tampering.

This feature ensures that the audit trail remains reliable and trustworthy, bolstering confidence in the accountability and transparency of AI systems. The use of blockchain technology also facilitates cross-organisational audits and accountability. Multiple stakeholders, including developers, data providers, regulators, and end-users, can access the blockchain-based audit trail and contribute to the auditing process. This collaborative approach enhances the effectiveness of audits, promotes shared responsibility, and strengthens the overall accountability framework surrounding AI systems. This is in line with the AI Act and can serve as an effective tool to enforce reliable and more effective audits. In addition, incorporating blockchain as a tool could reduce the need for human oversight as noted in Article 14 of the AI Act, since rules could be encoded into a blockchain system and smart contracts that help guarantee a system’s compliance.
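A minimal sketch of such an audit trail follows (our illustration, with hypothetical lifecycle events; on a real blockchain the chaining is performed by the ledger itself rather than by application code):

```python
# Minimal sketch (our illustration) of a hash-chained audit trail for an AI system's
# lifecycle: every entry embeds the hash of the previous one, so silently editing any
# recorded event breaks the chain and becomes detectable. Event names are hypothetical.
import hashlib
import json

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_event(trail: list, event: str, details: dict) -> None:
    previous = entry_hash(trail[-1]) if trail else "0" * 64
    trail.append({"event": event, "details": details, "prev_hash": previous})

def verify_trail(trail: list) -> bool:
    return all(trail[i]["prev_hash"] == entry_hash(trail[i - 1]) for i in range(1, len(trail)))

trail = []
append_event(trail, "dataset_registered", {"sha256": "ab12...", "source": "registry-X"})
append_event(trail, "model_trained", {"algorithm": "gradient boosting", "version": "1.0"})
append_event(trail, "model_deployed", {"operator": "provider-Y"})

print("trail intact:", verify_trail(trail))        # True
trail[1]["details"]["version"] = "1.1-unlogged"    # an attacker's silent edit
print("after tampering:", verify_trail(trail))     # False
```

The same verification can be performed independently by any auditor holding a copy of the trail, which is what enables the cross-organisational audits described above.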

Overall, by leveraging blockchain technology, AI systems can better enforce auditability requirements specified in the AI Act. The immutability, transparency, traceability, consensus mechanisms, smart contracts, and data security features of blockchain contribute to establishing a trustworthy and auditable framework for AI systems. This enables auditors to examine compliance, fairness, and accountability aspects of AI operations, promoting transparency and responsible AI development and deployment.

4.4 Identity and access management

As noted in Sect. 2, identity and access management is a crucial aspect of ensuring the security of AI systems. Along the same lines, the AI Act specifies the need for access control policies (Footnote 14), including a description of roles, groups, access rights, and procedures for granting and revoking access. Under Article 15, the AI Act aims to ensure that appropriate access control is established in high-risk AI systems to provide resilience against attempts by unauthorised parties to exploit the system. A more detailed description of access control is given in the technical guidelines for the implementation of minimum security measures for digital service providers by ENISA [20]. The document also underlines the need for a list of authorised users who can access certain security functions, including keeping logs of privileged accounts’ usage.

Blockchain technology presents an opportunity to enhance identity management and access control in a secure and decentralised manner. Traditional identity management systems often rely on centralised authorities or intermediaries to verify and authenticate users. This centralised approach introduces vulnerabilities and single points of failure that can be exploited by malicious actors. In contrast, blockchain-based identity solutions, such as SSI, offer a more secure and user-centric approach. With SSI, individuals have control over their personal information and digital identities. For example, the blockchain company Dock utilises SSI technology to allow people to self-manage their digital identities without depending on third-party providers to store and manage the data [42]. The solution, however, still relies on verifiers (e.g., employers, banks, universities) to attest to the validity of a given document (e.g., that a student has graduated).

Blockchain enables the creation of unique, tamper-resistant digital identities that are associated with cryptographic keys. These identities are stored on the blockchain and can be securely managed by the individuals themselves (Footnote 15). This decentralised approach can eliminate some of the control that centralised identity providers currently hold and reduce the risk of unauthorised access or data breaches. Moreover, in the context of AI systems, blockchain-based identity management can be leveraged to control access to AI models and data sources:

1. Users: Users can be selectively granted access permissions to specific AI models or datasets based on predefined rules and smart contracts. This allows for fine-grained access control, ensuring that only authorised individuals or entities can interact with the AI system. Users have the ability to maintain control over their personal data and can choose to disclose only the necessary information to the AI system. This reduces the reliance on third-party data custodians and minimises the exposure of sensitive personal data. Furthermore, the immutability and transparency of blockchain records provide a trustworthy audit trail of identity-related activities. Any changes or updates to identities, access permissions, or transactions can be recorded on the blockchain, enabling accountability and traceability. This can be particularly important in regulated environments or scenarios where compliance with data protection regulations is necessary.

2. Developers: Access control can also refer to the permissions and privileges granted to specific entities (e.g., developers) interacting with an AI system. It aims to prevent the extraction of sensitive data, prevent unauthorised access and code modification, and maintain the integrity and confidentiality of the system. In the context of AI and cybersecurity, access control involves implementing robust authentication and authorisation mechanisms, establishing fine-grained access policies, and enforcing secure roles and privileges. Specific parameters can be incorporated to restrict access to critical systems by leveraging the capabilities of tamper-proof decentralised infrastructure such as blockchains, smart contracts, and oracles. Organisations can define access restrictions and conditions by associating private keys with specific actions or permissions within the AI system; for example, certain critical system operations or sensitive data access can be tied to specific private keys. Private keys can be securely stored in digital wallets or key management systems, with access controls and encryption mechanisms to prevent unauthorised use or tampering, while the blockchain serves as the decentralised infrastructure that records the ownership of, and transactions related to, these keys, ensuring transparency and accountability and further reinforcing the AI Act standards on transparency (a minimal sketch of such key-based authorisation follows below).
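The sketch below is a hedged illustration of this idea (hypothetical setup, using the third-party ‘cryptography’ package): the registry of authorised public keys is the kind of record that could be kept on-chain, and a developer’s signature is verified before any critical action is executed.

```python
# Hedged sketch (hypothetical setup) of tying a privileged operation to a specific key
# pair, as described above. The registry of authorised public keys is the kind of record
# that could live on-chain; the developer's signature is verified before any critical
# action is executed. Requires the third-party 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key generation; in practice private keys live in wallets or key-management systems.
developer_key = Ed25519PrivateKey.generate()
authorised_keys = {"lead-developer": developer_key.public_key()}   # the "on-chain" registry

def execute_privileged_action(action: bytes, signature: bytes, claimed_role: str) -> str:
    """Run a critical operation only if it was signed by an authorised key."""
    public_key = authorised_keys.get(claimed_role)
    if public_key is None:
        return "rejected: unknown role"
    try:
        public_key.verify(signature, action)
    except InvalidSignature:
        return "rejected: invalid signature"
    return "executed: " + action.decode()

action = b"update-model-parameters v2.3"
signature = developer_key.sign(action)

print(execute_privileged_action(action, signature, "lead-developer"))                     # executed
print(execute_privileged_action(b"disable-safety-checks", signature, "lead-developer"))   # rejected
```

Note that the signature is bound to the exact action it authorises, so replaying it to trigger a different operation fails verification.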

5 Technology: a complementary tool to achieve legal compliance

In general, the EU position has been in line with implementing “Security-by-design” mechanisms as a way to improve the overall cybersecurity of digital systems. Security-by-design is a concept in software engineering and product design that takes security considerations into account at the early stages of product development (ex-ante). This includes considering security risks and vulnerabilities at every stage of development, from architecture and design to implementation, deployment, and testing [19].

One of the novel aspects of implementing blockchain for AI is that this technology allows for the introduction of both ex-ante and ex-post measures that can reinforce the overall cybersecurity of the system. With regard to our example of high-risk AI systems, by storing information on a decentralised and tamper-resistant blockchain, it becomes possible to establish a verifiable and auditable history of an AI system’s development and behaviour. Overall, the availability of verifiable and immutable time-stamps enables (ex-post) regulatory measures such as auditing procedures. On the other hand, as an (ex-ante) measure, designing a smart contract-based “Access Control” system would require a predetermined set of characteristics to be transposed on-chain.

Furthermore, Mueck et al. [34] maintain that for a proper enforcement of cyber measures (in accordance with the AI Act), there is a need for the establishment of an AI system architecture that would involve the creation of specific entities (namely: an Entity for Record Keeping, an Entity for Risk Mitigation, an Entity for AI Processing, an Entity for AI verification, etc.). The authors argue that the “Entity for Record Keeping” would be in charge of registering and administering the logging of user interactions and their connection with data, storage, and other parts of the system. Similarly, this entity would be in charge of ensuring that data was not modified or altered in any way. As suggested, the “AI System Management Entity” would be in charge of managing the interaction between the different entities, detecting any possible issues or undesired behaviour.

While we do not argue against the relevance of establishing suitable regulatory entities in order to reinforce and comply with the cyber measures in the AI Act, we argue that blockchain can serve as a useful tool to (a) reinforce the effectiveness of a given entity’s tasks and (b) establish a governance mechanism for decision-making between entities. For example, in both of the situations above, blockchain can be of help as it can provide a reliable and verifiable record of the data, helping detect any possible alteration. Via this tool the “Entity for Record Keeping” can have trusted information on data provenance, usage, date and time, etc. In the second case, blockchain can serve as a useful governance mechanism between different entities. In other words, blockchain allows for robust governance by providing a distributed network where multiple entities participate in consensus, allowing for more transparent decision-making processes. For example, if malicious behaviour such as data poisoning by an unauthorised party is registered by one supervisory entity, the system can signal to other entities to apply further verification. Similarly, this reduces the single-point-of-failure risk should one entity be hacked or become inaccessible. Likewise, via the usage of smart contracts, the decisions of all entities would be accounted for and automated within the AI system architecture.

6 Limitations

It is important to note that while blockchain technology offers several advantages, it may not be a suitable solution for all AI-related cyber risks. The implementation of blockchain in AI systems requires careful consideration of factors such as scalability, performance, and the specific requirements of the application. Additionally, blockchain technology itself is not immune to all cybersecurity threats, as noted in [38] and [41], and proper measures should be taken to secure the underlying infrastructure and smart contracts associated with the blockchain implementation [39].

7 Conclusion

In this article, we argue that blockchain technology offers a unique set of properties that can be harnessed to establish transparency, security and enhanced verification in AI systems. As the European Union’s regulatory focus intensifies on cybersecurity challenges related to artificial intelligence (AI), in tandem with the AI Act proposal, our objective is to illustrate how blockchain holds the potential to alleviate specific cybersecurity vulnerabilities associated with AI systems. We maintain that the incorporation of blockchain technology can enable specific AI-based systems to align with various provisions delineated in the AI Act. This alignment particularly pertains to aspects such as data, data governance, record-keeping, transparency assurance, and access control enforcement. We show how the decentralised and tamper-resistant attributes of blockchain offer solutions to fulfil these requirements, serving as a promising basis for establishing more secure AI systems. The study also explores how blockchain can address certain attack vectors related to AI systems, such as data poisoning in trained AI models and datasets. The overall goal of this analysis is to contribute to the progress of more secure AI implementations, not only within the EU but also globally. We seek to bridge the divide between legal and technical research by providing an interdisciplinary perspective on cybersecurity in the AI domain. Ultimately, the study aims to provide meaningful insights to aid policymakers in making informed decisions regarding the management of cyber risks associated with AI systems.