1 The pieces of the artificial intelligence liability jigsaw puzzle

With two proposals adopted on 28 September 2022, the EU Commission has crystallised an ambitious initiative [fn. 1] to assess the adequacy of liability rules in the digital age and to adapt them, where necessary, to embrace emerging, transformative and disruptive digital technologies, in particular artificial intelligence systems (AI systems). By testing the adaptability of liability rules, by revealing gaps and unfit solutions in legacy regimes facing digital challenges, by calibrating existing legal rules to accommodate the distinctive features of second-generation technologies, and by weighing possible policy options, the Commission has striven to solve the artificial intelligence liability jigsaw puzzle. Thus, the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence [fn. 2] (the draft Artificial Intelligence Liability Directive) and the Proposal for a Directive of the European Parliament and of the Council on liability for defective products [fn. 3] (the draft Revised Product Liability Directive) are key pieces of the artificial intelligence liability puzzle. Their role in solving the puzzle has to be assessed by exploring their interplay with other instruments and their effectiveness in fulfilling the policy goals pursued by the (revised) legal system as a whole.

The process of assessing the adequacy of liability rules for the digital age (primarily, for artificial intelligence systems) was triggered by a core concern: should the liability system reveal insufficiencies, flaws or gaps in dealing with damage caused by digital technologies, victims may remain uncompensated or, at best, only partly compensated. The social impact of any inadequacy of existing legal regimes in addressing the new risks created by artificial intelligence might then compromise its expected benefits. Indeed, the results of a behavioural study show that the

“perceived likelihood of receiving compensation in the event of damage caused by artificial intelligence applications shapes not only the degree of trust consumers have in artificial intelligence applications, but also the societal acceptance of those applications and consumers’ likelihood to buy or to use artificial intelligence-driven products or services: the higher the perceived likelihood, the higher the levels of acceptance, trust and, implicitly, the willingness to take up such products or services.” [fn. 4]

Moreover, factors such as the pervasive penetration of technologies into all aspects of social life and the multiplying effect of automation can aggravate the magnitude of the damage caused by artificial intelligence. Damage can easily go viral and propagate rapidly in a densely interconnected society. Hence, it has been critical to ensure that existing liability rules provide the same level of protection to every victim, regardless of the technology involved in causing the damage.

1.1 Policy quandaries and policy options: the artificial intelligence liability jigsaw puzzle

There were several policy quandaries to face and solve. First, whether to formulate artificial intelligence-specific liability rules or to accommodate artificial intelligence specificities within general liability rules. Second, whether to advocate a strict liability regime for damage caused by artificial intelligence systems or to preserve a fault-based approach as the default liability model. Third, what level of uniformity and legal harmonisation should be reached at EU level for damage caused by artificial intelligence systems.

The European Parliament Resolution on liability for the operation of artificial intelligence systems [fn. 5] of October 2020, containing a set of recommendations for a Regulation of the European Parliament and of the Council on civil liability for damage caused by the operation of artificial intelligence systems, took a very ambitious and radical position on the three policy quandaries described above. It put forward a proposal for a two-layer artificial intelligence-specific liability regime (strict liability for high-risk systems and fault-based liability for all other systems), with a high level of harmonisation at EU level to be achieved through the adoption of a Regulation. This proposal was not taken up. Instead, the Commission proposed, with a substantially different approach, the above-mentioned tandem of draft Directives, aimed at revising the defective product liability rules to accommodate artificial intelligence-enabled products and at alleviating the burden of proof in fault-based liability claims under national laws for damage caused by artificial intelligence systems. In departing from the Parliament’s 2020 proposal, the Commission takes a clear position on each of the three policy dilemmas bearing on the solution of the artificial intelligence liability puzzle.

The Commission’s approach is less drastic and much less forceful in its artificial intelligence specificity. Despite the telling name of the proposed directive, On adapting non-contractual civil liability rules to artificial intelligence (Artificial Intelligence Liability Directive), it is not actually a directive providing liability rules for artificial intelligence. Its aim and expected effect are more modest and, certainly, more pragmatic and realistic in terms of lawmaking: providing common rules on the disclosure of evidence and the burden of proof in non-contractual fault-based civil law claims for damage caused by an artificial intelligence system.

Interestingly, the proposed directive builds a bridge between the (future) Artificial Intelligence Act [fn. 6] and national laws on fault-based liability. While the Artificial Intelligence Act, as a regulation, is a decisive attempt at a high degree of harmonisation at EU level, non-contractual fault-based liability rules are essentially national and, therefore, hardly unified. Whereas the Artificial Intelligence Act follows a purely regulatory approach, the Artificial Intelligence Liability Directive is intended to fill the ‘redress gap’ in the Artificial Intelligence Act and to enhance the enforcement of the regulatory requirements for high-risk artificial intelligence systems by invigorating their role in non-contractual fault-based liability claims. Thus, failure to comply with such requirements triggers an alleviation of the burden of proof (one of the identified weaknesses of the legacy liability regime) through a set of rebuttable presumptions. Unlike under the Parliament’s approach in the 2020 Resolution, the risk-based categorisation of an artificial intelligence system does not trigger strict liability but feeds into the presumptions. The list-based approach adopted in the Parliament’s proposal to classify high-risk artificial intelligence systems was not necessarily linked to the Artificial Intelligence Act, whereas the Commission, in the Artificial Intelligence Liability Directive, coherently builds the bridge by relying on the risk classification of the Artificial Intelligence Act.

As for the harmonisation-level dilemma, the Commission renounces maximum harmonisation through the adoption of a Regulation on liability rules. Instead, it designs a complex and delicate scheme of interactions between the Artificial Intelligence Act, the two newly proposed Directives, and the national rules on non-contractual fault-based liability. The Commission builds two bridges that create a dense framework of liability rules for artificial intelligence, albeit one not fully consolidated at EU level: a bridge between the Artificial Intelligence Act and national fault-based liability rules, and a complementarity bridge between the product liability regime and fault-based liability rules.

The artificial intelligence liability puzzle is thus resolved along these lines. The policy options articulated by the described proposals do not ensure that the puzzle produces a fully harmonised picture of artificial intelligence liability, insofar as the liability system remains highly dependent upon national laws on fault-based liability. Nonetheless, the new pieces pragmatically signal a desired and long-awaited trend towards increasing unification of liability regimes.

In this context, the revision of the Product Liability Directive plays a key role, with a harmonising potential that is likely to exceed the traditional, formal effect of a Directive. By enlarging the scope of the Product Liability Directive to embrace artificial intelligence-enabled goods and artificial intelligence systems, and by accommodating some rules to the characteristics of such systems, the harmonisation potential of the Product Liability Directive is reinforced, expanded, and leveraged: even if this involves proceeding by way of a directive, it is one with an express full harmonisation clause (Art. 3 Revised PLD). [fn. 7]

The EU piece in the liability puzzle is thereby enlarged and placed at the centre. The revision of the Product Liability Directive thus plays a fundamental role in the efforts to resolve the artificial intelligence liability jigsaw puzzle efficiently.

1.2 The possibilities and the limitations of the Product Liability Directive

In deciding to focus on the Product Liability Directive [fn. 8] as the answer to the challenges that artificial intelligence poses to liability rules, and to use the revision of the Directive as the most realistic way to harmonise artificial intelligence liability to the fullest extent possible, a conceptual quandary had to be untangled. Enlarging the scope of the Product Liability Directive to embrace artificial intelligence systems entails broadening the concept of a ‘product’. How far can the concept of a ‘product’ be stretched without being denatured? How much stress can the product liability regime withstand in accommodating artificial intelligence without its foundations being altered?

The decision to subject the Product Liability Directive to a scope-expanding revision fundamentally leverages the harmonising potential of this policy option, given the accepted role of the Product Liability Directive as an EU enactment on liability. In a way, by expanding the Product Liability Directive to embrace artificial intelligence systems instead of shifting the focus to an alternative artificial intelligence-specific legislative initiative, the Commission is betting on a harmonising instrument that is already accepted by industry and transposed by national legislators. The revision of the Product Liability Directive reinforces and enlarges the scope of EU rules within a harmonised remit that is now widely unquestioned. Non-contractual fault-based liability is still largely national, but Member States have already ceded the product liability regime to EU rules. Ensuring that this model, widely accepted by legislators and industry alike, continues to work properly in the digital era should be less contentious than recalibrating the EU-national footprint on the liability scene.

The revision of the Product Liability Directive to accommodate artificial intelligence systems and other challenges of the digital age could be neither a mere act of will nor a simple exercise in interpreting existing provisions so as to force them to embrace digital products. There were conceptual and practical hurdles to overcome. Explicit and clear solutions were needed, and expected by the market, to ensure certainty and enhance predictability. Hence, the revision of the Product Liability Directive entails a process of terminological clarification (of terms such as ‘product’ and ‘defectiveness’) and conceptual acknowledgement of artificial intelligence systems, as well as the addition of new rules and the incorporation of artificial intelligence-specific considerations.

The revision therefore had to start from the core of the product liability system: the concept of product and the assessment of defectiveness. Since its adoption in 1985, the Product Liability Directive has provided a definition of ‘product’ for the purposes of the Directive [fn. 9] (a definition slightly amended in 1999 [fn. 10]), which accordingly determines its sphere of application. The meaning of this central concept of ‘product’, [fn. 11] and its capacity to cover upcoming market and technological developments, are decisive in assessing the versatility of the rules and the long-term fulfilment of policy goals. Furthermore, the conception of ‘product’ (its definition, judicial interpretation and application) pervades the entire defective product liability machinery and determines, directly or indirectly, the meaning, extent and operation of its other component parts: notions such as ‘defect’ and ‘producer’, issues such as defences and causation, and the idea of putting a product into circulation.

Hence, the challenges posed by technological progress and market developments not only call into question the core conceptual element of the defective product liability system but also extend to its other elements. The emergence and expanding market penetration of artificial intelligence-enabled goods, smart products, and artificial intelligence systems are timely catalysts for profound reflection on the review (or re-reading) of the Product Liability Directive through a digital lens. Artificial intelligence-enabled products exceed the practical and conceptual perimeter of the concept of ‘product’ as conceived, devised, and constructed in 1985, [fn. 12] and as expanded and updated in subsequent years. Despite the clear aspiration of the Product Liability Directive to produce adaptive rules, the concept of product had a recognisable industrial and post-industrial flavour.

Artificial intelligence-enabled products and artificial intelligence systems embody all the distinctive characteristics of the digital economy and the most disruptive features of the second generation of digital transformation. [fn. 13] The advent and pervasive proliferation of smart products in modern life means that ‘products’ have been transformed into data-based, artificial intelligence-driven, complex ecosystems of interconnected devices. As a matter of fact, the contemporary economy produces, relies on, and is based on ‘open products’ (more precisely, ‘open product ecosystems’) that interact with the environment, grow as new devices and components are incorporated, evolve as they are updated and upgraded, and constitute an indistinguishable blend of services and products. The conundrum is then whether the product liability regime can embrace an artificial intelligence system as a ‘product’ without distorting that concept’s rationale and denaturing its essence.

2 Major changes in the Product Liability Directive to embrace artificial intelligence: an assessment

2.1 Product and component: the radical game changers in the revision

The stylised definition of ‘product’ in its original wording (Article 2 PLD) covers all movables, including those incorporated into, affixed to, or associated with another movable or an immovable. It is a broad and malleable definition that effectively encompasses the varied typology of products stemming from industry and from market innovation. It is indeed comprehensive and reasonably future-proof, but it is still somewhat corseted by an industrial logic. Smart products, and indeed even software, thus challenge not only the concept of ‘product’ in the Product Liability Directive but also, and fundamentally, the foundations of the distinctions between products and services, [fn. 14] between assets and data, [fn. 15] and between objects and subjects in the modern economy, and consequently the formulation and application of legal rules. That is precisely why smart products invite a bold revisiting of the conceptual basis and the policy vectors underpinning the Product Liability Directive.

Artificial intelligence-enabled products challenge the legacy product liability system because they blur the line distinguishing products from services [fn. 16] and overflow the contours of products as single units, transforming them into complex ecosystems that evolve throughout their life cycle: they are updated and upgraded, fed with data, and interact with the environment as if they were, metaphorically, ‘quasi-living beings’. [fn. 17] These are the disruptive aspects of artificial intelligence-enabled products that perfectly capture the specific characteristics of emerging digital technologies, [fn. 18] guiding the reflection exercise led by the EU to assess the adequacy of liability frameworks [fn. 19] and their ability to accommodate digital innovation. [fn. 20]

Accordingly, the decision in the revised Product Liability Directive to amend the definition of ‘product’ and explicitly include ‘software’ within it is critical and to be welcomed. But even more telling is the express clarification in the explanatory note (p. 6) and in Recital 12 that artificial intelligence systems and artificial intelligence-enabled products are ‘products’ for the purposes of the Product Liability Directive. In consequence, providers of artificial intelligence systems (as defined in the Artificial Intelligence Act) will be regarded as manufacturers. By explicitly mentioning ‘software’ in the definition of product, this blunt drafting solution appreciably clarifies the scope of the regime, even if, unfortunately, as with any drafting amendment, some uncertainties are alleviated while new ones are created. Thus, as noted by the European Law Institute, [fn. 21] it remains unclear whether other digital content that may be functionally equivalent to software, despite not executing specific tasks on its own, is included as a product. Nor is it clear whether SaaS (software-as-a-service) is included, given that, from the victim’s perspective, the commercialisation model (provision as a standalone product or under a subscription agreement) would seem irrelevant to ensuring compensation.

Going beyond the expanded meaning given to the concept of ‘product’, the amended definition of ‘component’ is certainly much more revolutionary and enticing. Under the revised text, ‘component’ means any item, whether tangible or intangible, or any related service, that is integrated into, or inter-connected with, a product by the manufacturer of that product or within that manufacturer’s control. This definition, together with the accompanying definition of ‘related service’ (‘a digital service that is integrated into, or inter-connected with, a product in such a way that its absence would prevent the product from performing one or more of its functions’), is replete with elements emerging from the artificial intelligence-driven paradigm shift: interconnection, integration, performance of functions, and, undoubtedly, the related service itself. ‘Services’ conquer the ‘product’ terrain, and this conquest is made visible and explicit, revealing the blurring of the conceptual boundaries.

Furthermore, the definition of ‘component’ introduces another fundamental element in the new logic underlying the revision: the manufacturer’s control. It is a factor that helps the revised Product Liability Directive to re-gauge the extent of a manufacturer’s liability in a market of evolving, learning (artificial intelligence-enabled) products.

2.2 Recontextualising defectiveness: factors to consider

Complexity, opacity, increasing autonomy, openness, vulnerability, and data dependency are also distinctive features of artificial intelligence-powered systems. They are present with varying intensity across the constellation of artificial intelligence systems and artificial intelligence-enabled products; indeed, complexity, opacity and autonomy are matters of degree. The extent to which these distinctive features require a profound redefinition of existing rules and principles or, on the contrary, merely shake the pillars of the legal system without compromising their stability and durability merits careful consideration.

The revision of the Product Liability Directive has recognised these challenging features and addressed their impact on the existing rules with different solutions.

The drafting of Article 6 on defectiveness evinces clear attention to the specificities of digital products and artificial intelligence systems. The solution has been to extend the list of factors to be considered in assessing the defectiveness of a product. The new factors included in the list will notably improve the current wording of the (slim) Article 6 PLD. These factors patently reveal an intention to focus attention on artificial intelligence features. That is especially telling in, inter alia, “(a) (…) instructions for installation, use and maintenance”; “(c) the effect on the product of any ability to continue to learn after deployment”; “(d) the effect on the product of other products that can reasonably be expected to be used together with the product”; “(f) product safety requirements, including safety-relevant cybersecurity requirements”; and “(h) the specific expectations of the end-users for whom the product is intended”.

This drafting solution is to be welcomed: it aptly enhances clarity and provides guidance in the assessment of defectiveness where artificial intelligence-enabled products are concerned.

With the same undisguised intention of tackling artificial intelligence-related challenges, the second paragraph of Article 6 addresses the particularities of digital products as ‘quasi-living creatures’ that evolve in the market, learn, and are updated and upgraded, by clarifying that ‘a product shall not be considered defective for the sole reason that a better product, including updates or upgrades to a product, is already or subsequently placed on the market or put into service’.

2.3 Alleviating the burden of proof

It has been convincingly stated [fn. 22] that opacity, autonomy, and complexity upset the equilibrium of interests underlying the current distribution of the burden of proof in the Product Liability Directive (Article 4), as in any compensation claim for damages. The injured party will encounter significant difficulties in proving a defect, and the causal relationship between defect and damage, in the face of highly opaque, complex artificial intelligence-driven decision-making. Mere transparency might not succeed in enhancing the position of the victim, given the complexity of the underlying algorithmic bases, unless these are clarified by an effective explanation of the reasons, the decision-making path, and possible critical deviations or biases. Likewise, the efforts needed to collect relevant evidence, trace actions throughout the causal chain, and gather data may prove fruitless, dissuasive, or unaffordable without the cooperation of the actors involved in the operation of the technological ecosystem.

As previously mentioned, with the proposed Artificial Intelligence Liability Directive the Commission aims to address one of the most visible friction points in accommodating damage caused by artificial intelligence systems within traditional non-contractual fault-based liability rules: the disclosure of evidence and the burden of proof.

The Product Liability Directive model likewise required amendment to rebalance the resulting asymmetry.

Acknowledging the asymmetry that the above-described disruptive features are likely to create to the detriment of the injured party requires revisiting the legal logic underpinning the defective product liability regime. Shifting the burden of proof, alleviating it, or lowering the standard of proof in favour of the weaker party are all reasonable responses to remedy the imbalance.

Articles 8 and 9 of the revised Product Liability Directive combine two methods of facilitating proof by the victim: rules on the disclosure of evidence (Art. 8) and rebuttable presumptions (Art. 9). The Product Liability Directive thus also benefits from techniques for alleviating the burden of proof similar to those incorporated in the Artificial Intelligence Liability Directive.

2.4 Readjusting defences

Two characteristics of artificial intelligence-enabled products are particularly disconcerting and challenging for traditional product liability logic: data-dependency and openness.

The ability of an artificial intelligence system to perform actions and take decisions depends primarily upon the static and dynamic data processed by the system: personalised user settings, direct instructions, data collected by connected IoT devices, predictions, and data provided by authorised oracles or by third parties. Artificial intelligence-enabled products are highly data-dependent. Consequently, the availability, accuracy, sufficiency, and reliability of the data fed into the system are critical performance factors for an artificial intelligence system. The defective functioning of an artificial intelligence-enabled product may be caused by ‘defective information’ (i.e., false or inaccurate information). [fn. 23] That has an impact on the applicable liability rules, but it also affects the traditional role of the manufacturer in controlling the product. In fact, the role of the manufacturer is diluted among a multitude of other actors contributing to the design, functioning, and use of the artificial intelligence-enabled product, while at the same time it extends beyond the point in time when the product is placed on the market. The product is not a ‘finished product’, at least in the classical sense. On the contrary, it is an ‘open, unfinished product’ that is fed by multiple data flows, personalised by the user, and ‘trained’ in the course of its operation on the basis of self-learning capabilities.

Furthermore, unlike traditional products, smart products are enriched by updates, additions, and upgrades throughout their life cycle, after they have been put into circulation. Updates and upgrades may be delivered at different times for different interconnected devices. The respective manufacturers may react asymmetrically in providing updates, releasing security patches, or fixing vulnerabilities. And the proactive cooperation of the user or the operator may be required to complete their implementation or to render it effective. All these considerations challenge the adequacy of the ‘put-into-circulation’ moment in a universe of artificial intelligence-enabled products. This factor is relevant both in assessing defectiveness in the light of the safety expectations of the average consumer, [fn. 24] as discussed above, and in relation to certain defences and limitations of liability.

Putting a product into circulation no longer marks the end of the producer’s oversight of, and interaction with, the product. This bears on the reasonableness of the later-defect defence and on the adequacy of a development risk defence in a world of ‘open’ smart products. Indeed, the need to preserve a development risk defence has been questioned, but no such radical amendment has been proposed in the revision.

These challenging issues have not gone unnoticed in the revision of the Product Liability Directive. Article 6(e) explicitly includes, in the list of factors to be considered in the defectiveness assessment, ‘the moment in time when the product was placed on the market or put into service or, where the manufacturer retains control over the product after that moment, the moment in time when the product left the control of the manufacturer’. This idea of the manufacturer’s control plays a key role in the reconfiguration of the later-defect defence. With an unmistakable reference to the particular openness and ‘quasi-living’ nature of artificial intelligence systems, the second paragraph of Article 10 of the revised Product Liability Directive excludes from the liability exemption those situations where the defectiveness of the product, even if it comes into being after the product was placed on the market, is due to a related service, to software updates or upgrades, or precisely to the lack of software updates or upgrades necessary to maintain safety. This limitation applies to the extent that such defect-triggering factors are within the manufacturer’s control.

As anticipated, the manufacturer’s control plays a central role in the new operational and conceptual logic of the product liability regime. As per Article 4 (Revised PLD), manufacturer’s control ‘means that the manufacturer of a product authorises (a) the integration, inter-connection or supply by a third party of a component including software updates or upgrades, or (b) the modification of the product’. It is the calibrator between the old logic of the later-defect defence and the new logic of open, learning products.

3 The revised product liability regime: a key piece in the artificial intelligence liability puzzle

The revision of the Product Liability Directive, together with the proposed Artificial Intelligence Liability Directive, embodies a fundamental policy decision on how to address and solve the artificial intelligence liability puzzle. The Commission has opted to build on the tested effectiveness of the product liability system after decades of operation in the market and to invigorate the harmonisation potential of the Product Liability Directive. The policy strategy for dealing with liability in respect of artificial intelligence thus revolves, to a significant extent, around a modernised, upgraded, and ambitiously expanded Product Liability Directive.

It is a key piece in the artificial intelligence liability puzzle. But its role has to be assessed in relation to the other components of an increasingly dense legal framework for loss caused by artificial intelligence systems. The revised Product Liability Directive thus operates in a triangular structure: in a complementary interplay with national fault-based liability rules (partially and modestly harmonised by way of the Artificial Intelligence Liability Directive), and in conjunction with a skilfully built bridge between the (future) Artificial Intelligence Act, which lacks a satisfactory redress mechanism, and liability rules operating through a set of rebuttable presumptions.