Identifying and Mastering Legal Uncertainty Concerning Autonomous Systems

The level of uncertainty concerning the use of autonomous systems is still very high. This poses a liability risk for manufacturers, which can impede the pace of innovation, and legal uncertainty contributes to this risk. This paper discusses existing legal uncertainty. The identified uncertainty can stem from different sources, and categorizing these sources is our first step toward mastering legal uncertainty. On the basis of these categories, we can evaluate where the focus for mastering legal uncertainty should lie. This approach promises to identify true legal uncertainty, which can only be mastered by new legislation, and to separate it from other forms of legal uncertainty that stem from unclear legal guidelines or from uncertainty regarding the application and scope of existing rules. The latter could be mastered by specifying the existing rules and guidelines, or even by merely clarifying the scope of their application, a much less drastic solution. Establishing how to deal with the different categories of legal uncertainty will then contribute to minimizing liability risks for manufacturers.


Product Compliance
The term product compliance is derived from the general term of compliance and refers to the specific laws and regulations that have to be met when designing and marketing products [1]. The obligations combined under the umbrella term of product compliance vary with the product, the customer, and the country in which the product is marketed [2]. For our discussion, we will focus on markets, and therefore on regulations, in Europe and specifically Germany. The underlying premise for any attempt at successful product compliance is to be aware of the applicable regulations and the scope of these rules. In general, these rules are derived from public law, civil law or criminal law.
The preventative character of public law forbids the marketing of any unsafe products. Compliance with these regulations therefore has to be assured in order to market products at all. The Product Safety Act is the central legal framework in this context. In severe cases, market surveillance authorities can get involved and order a company to stop marketing a product, or even order a recall of dangerous products.
The rules of civil law are concerned with compensation for damages. The central regulations are the Product Liability Act and product liability derived from general tort law, paragraph 823 section 1 of the German Civil Code. Liability claims due to defective products cost money and can damage the reputation of the producer; they should therefore be avoided.
The obligations resulting from civil law can be divided into three categories for liability on the basis of the Product Liability Act, and four for product liability derived from general tort law. Producers need to ensure a safe product throughout the stages of design and manufacturing, as well as instruct users, if necessary, on the limitations of the product and possible dangers [3, marg. 12]. For the producer's liability derived from tort law, the producer also needs to ensure effective product monitoring after the product is marketed.
In some extreme cases, criminal law can also become relevant. If, for example, people are severely injured, human failure can be punished with criminal penalties.
Following this brief overview of the relevant regulation, we will now discuss different categories of legal uncertainty. This chosen approach aims at developing more differentiated solutions on how to overcome legal uncertainty. As part of the solution, we will discuss the role an effective product compliance management system can play in mastering this uncertainty and where it has its limitations.

Categories of Legal Uncertainty
Legal uncertainty is not a set expression with a specific meaning attached to it; it is rather open to interpretation. Depending on the level of legal and technical expertise, the term will carry a slightly different meaning for each practitioner.
Starting from a legal point of view, the sources of legal uncertainty can be manifold. In this article, we will also distinguish between what we call perceived and true legal uncertainty.
What often results in uncertainty for developers and producers when applying laws and regulations is the abstract nature of their wording. Since every case needs to be subsumed under the regulation individually, uncertainty can arise concerning the applicability of the regulation. In the case of the Product Safety Act, this can mean uncertainty about the level of safety a product needs to meet in order to be deemed safe for marketing. In the case of civil liability, it can mean uncertainty about the obligations developers and producers have to meet in order to comply with what is legally expected of them.
In order to make legal regulation more accessible to practitioners, technical standards and norms have been developed. These norms and standards are based on experience and expertise in their field of application. They are set by private standardization committees and represent the current state of the art. They are an important guideline for practitioners, but for the most part not legally binding [4, 5]. Nevertheless, the state of the art reflected in these standards serves as the minimum safety level a product needs to provide. Compliance with this minimum safety level indicates that the producer or developer has done everything that can be expected of him to achieve a safe product that will not cause harm to users or third parties [6, marg. 16]. There are exceptions: compliance with norms and standards concerning harmonized products results in the presumption that the product is safe [7].
Still, even though technical standards are more detailed and concrete, their development in the standardization committees takes time. Newer and innovative technical possibilities will often not be covered by these standards until they are widespread in the industry. This delay can again result in legal uncertainty. All of the aforementioned cases will be referred to in this article as perceived legal uncertainty. This label does not constitute a judgment, as this kind of uncertainty also has the potential to hinder innovation by developers and producers.
The most problematic source of legal uncertainty occurs when the legal framework is actually not applicable to a certain type of product. This is what we call true legal uncertainty. In such cases, the current legal regulation is no longer applicable, or its applicability is still part of an ongoing scientific discussion [8]. In this article, we will not go into the depth of this debate, but rather focus on the solutions that are currently being discussed.

Legal Framework Is Applicable
In the case that the current legal framework is applicable, we can differentiate between three cases of uncertainty, depending on the combination of the sources of uncertainty. We assume that the generally applicable legal regulations are known.

Uncertainty in Application Concerning Specifics of Legal Obligation
Since the abstract wording of legal regulation can be difficult to apply to individual cases, as already discussed, uncertainty can arise about the extent of the legal obligations producers and developers need to comply with. Where technical standards exist, they offer guidance as to the current state of the art on the market. Producers should therefore meet at least the minimum safety level implied by this state of the art. Doing so implies that the product is safe to be marketed, and interference by market surveillance authorities becomes unlikely. The same goes for possible liability claims, at least as long as the product proves to be safe during the use phase. Product monitoring therefore remains an essential obligation that should not be neglected.
This is the simplest case of perceived legal uncertainty, and compliance with technical standards is the standard route to obtaining certainty in the development process.

Uncertainty in Application Concerning Specifics of Legal Obligation and Uncertainty About What Technical Standards Apply
In some cases, existing standards might not fit, or an alternative design choice promises a better and safer product. Developers and producers are not bound by technical standards, but as soon as they choose to deviate from the suggested procedure, detailed documentation of the design process and of the choices made becomes more important. It is also worth mentioning that producers are compelled to choose the design alternative that promises the safest possible product. If the solutions proposed by technical standards do not represent what is feasible with state of the art technology, a different product design has to be chosen [9; 6, marg. 16]. Developers can not only choose to deviate from technical standards, but may even be obliged to do so.
Although technical standards are, for the most part, not legally binding, noncompliance comes with a certain liability risk. In order to minimize this risk, producers should be able to provide reasoning for any deviation from technical standards. It is recommended that the design process, and especially the selection procedure for the final product design, be well documented as part of a product compliance management system. Special attention should also be given to the product monitoring aspect of the compliance management system.
Even if the product then proves to be defective during the use phase, the developer can show that he has done everything that could reasonably be expected of him to ensure a safe product.

Uncertainty in Application Concerning Specifics of Legal Obligation and No Technical Standards Apply
Especially for the design of innovative new products, technical standards might not be available because they have not yet been developed. It is worth mentioning that it is neither necessary nor practical to apply technical standards developed for one specific product to a similar-appearing product [10]. In these cases, perceived legal uncertainty will naturally be high, as producers cannot rely on technical standards to ensure that the marketed product will at least be presumed safe.
Although there are no technical standards to offer guidance, producers do not operate in a legal vacuum, since the legal framework is still applicable. The producer needs to show, in ways other than compliance with technical standards, that the product is safe. It is advised that producers therefore implement a risk evaluation as described in the RAPEX Guideline [11]. Although the RAPEX Guideline is not legally binding, it provides useful information on how to carry out a risk assessment for newly developed products [12].
This risk assessment should also be documented in detail as part of a product compliance management system.
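As an illustration only, such a risk assessment can be thought of as combining the severity of a possible injury with the estimated probability of the harm scenario. The following Python sketch uses invented severity scales, probability thresholds and risk classes as stand-ins; it does not reproduce the actual tables of the RAPEX Guideline.

```python
# Hypothetical sketch of a RAPEX-style risk assessment: combine injury
# severity with the probability of the harm scenario. All numeric
# thresholds and class names below are invented for illustration.

INJURY_SEVERITY = {1: "minor", 2: "moderate", 3: "serious", 4: "very serious"}

def risk_level(severity: int, probability: float) -> str:
    """Map injury severity (1-4) and harm probability to a coarse risk class."""
    if not 1 <= severity <= 4:
        raise ValueError("severity must be between 1 and 4")
    if severity == 4 and probability > 1e-6:
        return "serious"      # gravest injuries: almost any probability counts
    if severity >= 3 and probability > 1e-4:
        return "high"
    if severity >= 2 and probability > 1e-2:
        return "medium"
    return "low"
```

Such a matrix makes the documentation requirement concrete: for every identified harm scenario, the producer records the assumed severity, the estimated probability, and the resulting class.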

Legal Framework Is not Applicable: True Legal Uncertainty
True legal uncertainty arises when the legal framework is no longer applicable.
The current developments in robotics and AI show that the legal framework will have to adapt in order to keep up with changing technology [13]. With products becoming ever more complex, connected and "intelligent", the current legal framework will reach its limits.
The limitations mostly derive from the scope of application of the current product liability regime. Its legal definitions and concepts were developed at a time when software, for example, still played no significant role. Since then, technology has evolved, whereas the legal framework has not. Some legal concepts, such as the definition of a defective product, which plays a central role in the Product Liability Act, now have to be revised. Currently, a product is deemed defective if it does not provide the level of safety that can reasonably be expected of it. The relevant time for this consideration is limited to the marketing of the product; any responsibility to monitor the product after this point can only be derived from general tort law, not from the Product Liability Act. When placing the product on the market, the producer needs to ensure the product is safe.
Truly autonomous systems, however, adapt and change during their use phase. This change can hardly be predicted by the manufacturer at the time the product is marketed. It is not clear how the definition of a defective product should be applied to, or whether it is even applicable to, autonomous systems [12]. For truly autonomous products that operate and adapt without human interaction, the current regulation therefore needs to be adjusted, or entirely new regulation may be necessary [14, p. 11; 15]. Even if legislation is adapted, which can be expected, important research and development takes place at a much earlier stage. In the interest of having safe products on the market, as well as not hindering innovation by developers, legal obligations should be clear at the time innovative products are developed. Addressing the issue of regulation for AI-driven systems is an important step towards safe autonomous products in the future.

Solutions
The aforementioned cases of perceived and true legal uncertainty have to be met with different approaches in order to manage said uncertainty.
In the three cases described as perceived uncertainty, the main tool should be implementing an effective product compliance management system. What this can and should entail will be part of the following discussion.
As for the case of true legal uncertainty, a product compliance management system is of limited use. An adaptation of existing regulation or an entirely new regulation is needed to truly master legal uncertainty.
The white paper of the European Commission [14] has sparked the discussion about how AI can be regulated in the future. The most prominent feature of the proposal is the risk-based approach to regulation. Joining the discussion on how this risk-based approach could actually be implemented, the German Data Ethics Commission has published an opinion on the subject [16]. In the following, we will also give an overview of what this approach implies.

Product Compliance Management System
An effective product compliance management system should be organized in such a way that legal obligations are integrated into the company's procedures and processes [17]. The legal obligations developed by German case law, as mentioned before, concern product design, manufacturing, instruction of the user and product monitoring. These obligations can be translated into the three categories of prevention, detection and response.

Prevention
Producers are faced with the task of preventing harmful products from being marketed and, as a result, preventing damage to the life, body and property of users and third parties [18, 19]. Consequently, products need to be safe by design [3, marg. 15]. Tools to aid this task include approval and release processes, quality control for in-house production chains as well as quality control for incoming parts from suppliers and initial samples of supplier parts [20]. In the context of supplier quality control, auditing procedures should be taken seriously. Traceability of products and product parts also plays a role in preventing harm to users. A legal obligation covering one aspect of traceability can be found in paragraph 6 section 1 of the Product Safety Act: consumer products must be labeled with the contact information of the producer and with information identifying the product. In the case of a product recall, the latter helps consumers determine whether their product is affected.
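A labeling obligation of this kind lends itself to an automated check in a product data pipeline. The following Python sketch validates that a product record carries producer contact information and a product identifier; the field names are our own choice, not terms from the Product Safety Act.

```python
# Illustrative traceability check in the spirit of paragraph 6 section 1
# of the Product Safety Act: a consumer product label must carry producer
# contact information and data identifying the product. The field names
# here are hypothetical.

REQUIRED_LABEL_FIELDS = ("producer_name", "producer_address", "product_id")

def missing_label_fields(label: dict) -> list:
    """Return the required label fields that are absent or empty.

    An empty result means the label passes this (simplified) check.
    """
    return [field for field in REQUIRED_LABEL_FIELDS if not label.get(field)]
```

Running such a check as part of the release process documents, for every marketed unit, that the traceability obligation was considered.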

Detection
An effective product compliance management system should be able to detect harmful products on the market before damage occurs. The obligation to monitor a product during the use phase should also be used to gain insights for potential improvements of the product design. An important requirement for product monitoring is a system for complaint management [21]. Incoming complaints need to be assessed, evaluated and documented. In addition to this passive form of product monitoring, producers also need to actively assess the safety of the product on the market. This can be done by inspecting sample products, checking for insights provided by newer state of the art technology [22], and also by consulting accident statistics, product reviews or user forums on the internet [23].
If product monitoring detects a potential danger for users or third parties, a response system should step in.

Response
The response system should entail a set procedure defining which actions have to be taken in which event. The necessary actions will vary with the severity of the potential danger. In some cases, it can be enough to simply inform users of potential risks and how to avoid them [24]. In cases of high risk to the life or body of users or third parties, a product recall can be the only acceptable response. It is worth mentioning that market surveillance authorities can intervene if the measures taken by the producer are not sufficient to prevent potential danger for users or third parties. These measures can range from prohibiting further sales of a product to ordering a product recall.
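Such a set procedure can be sketched as a simple severity-to-action mapping. The severity classes and actions in this Python sketch are hypothetical examples of how a company might predefine its escalation path, not legally prescribed categories.

```python
# Hypothetical sketch of a predefined response procedure: each detected
# hazard severity maps to a fixed action, mirroring the escalation from
# user information up to a product recall described above.

RESPONSE_BY_SEVERITY = {
    "low": "inform users of the risk and how to avoid it",
    "medium": "stop further sales and investigate the defect",
    "high": "recall the product",
}

def respond(severity: str) -> str:
    """Look up the predefined response action for a detected hazard."""
    if severity not in RESPONSE_BY_SEVERITY:
        raise ValueError("unknown severity: " + repr(severity))
    return RESPONSE_BY_SEVERITY[severity]
```

The point of fixing the mapping in advance is that, in an incident, the company can show it followed a documented procedure rather than improvising.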

Risk-Based New Regulation for Artificial Intelligence
The solution to true legal uncertainty can only be an adjustment of the current legal framework, new guidelines for its application, or a new regulation. The latter is proposed by the European Commission. The proposed new regulation for artificial intelligence revolves around the idea of a risk-based approach. In the following, systems based on artificial intelligence will also be referred to as autonomous systems.

The General Idea
The idea behind the risk-based approach is to regulate according to the potential dangers that can come with the use of artificial intelligence. This is simply the application of the general principle of proportionality [25].
How to categorize the many possible autonomous systems according to their potential danger for users or others is the main question on which the success of the new regulatory framework depends.

Proposal of the European Commission
In the white paper on artificial intelligence, the European Commission proposes a risk-based approach for a new regulatory framework for artificial intelligence. The risk assessment should depend on the application area and purpose of the autonomous system [14, p. 20]. In the white paper, the only differentiation made is between systems and products with "high risk" and "others". In order to achieve a practical solution, the framework should entail a more nuanced inspection of the systems and products to be regulated. The white paper also lacks a proposal on how the risk-based approach should be put into practice.

Opinion of the Data Ethics Commission
The German Data Ethics Commission proposes a much more differentiated risk-based approach for the regulation of artificial intelligence, or algorithmic systems, the term used in the proposal. A criticality pyramid is intended to form the base of the new regulatory framework. The pyramid comprises five categories. Each category represents a level of risk and is assigned different regulatory obligations [16, p. 177].
On the first level, applications with zero or negligible potential for harm are assigned no special measures.
The second level is made up of applications with some potential for harm. The regulatory measures for this risk level include, for example, transparency obligations, publication of risk assessments, monitoring procedures such as audits, or disclosure obligations towards supervisory bodies.
Level 3 applications are those with regular or significant potential for harm. The proposed measures for these applications are, in addition to the measures that apply to level 2 applications, ex-ante approval procedures. This approval could be implemented through product licensing. As the Data Ethics Commission also states, the approval might have to be reviewed and renewed throughout the further development of the product and its life-cycle.
Level 4 applications pose a serious potential for harm. The obligations already mentioned for level 2 and 3 applications could be supplemented by requirements enabling continuous oversight of the application by supervisory institutions.
Products and applications which pose an intolerable potential for harm should be totally or partially banned, according to the Data Ethics Commission [16, p. 180].
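The cumulative logic of the criticality pyramid can be sketched as follows. The obligation wording is paraphrased from the description above, and treating level 5 as "cumulative" is a simplification, since a banned application is simply not marketed.

```python
# Sketch of the five-level criticality pyramid proposed by the German
# Data Ethics Commission: each level adds obligations on top of those
# of the levels below it. Obligation texts are paraphrased summaries.

OBLIGATIONS = {
    1: [],  # zero or negligible potential for harm: no special measures
    2: ["transparency obligations",
        "publication of risk assessments",
        "monitoring procedures such as audits",
        "disclosure obligations towards supervisory bodies"],
    3: ["ex-ante approval (e.g. licensing), reviewed over the life-cycle"],
    4: ["continuous oversight by supervisory institutions"],
    5: ["total or partial ban"],
}

def obligations_for(level: int) -> list:
    """Return the cumulative obligations for a criticality level (1-5)."""
    if not 1 <= level <= 5:
        raise ValueError("level must be between 1 and 5")
    return [item for lvl in range(1, level + 1) for item in OBLIGATIONS[lvl]]
```

The open design question the article raises then becomes explicit: the hard part is not the mapping itself, but deciding which product properties place a given autonomous system at which level.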
The opinion of the German Data Ethics Commission thus offers a much more concrete and differentiated approach to the idea of risk-based regulation of algorithmic systems. The challenge for the new legal framework will be to define the properties of the products and systems that fall into the different categories and are thereby assigned specific risks and regulatory measures.

Conclusions
In conclusion, we have found that most legal uncertainty can be minimized with the tools provided by law or by technical standards. Where uncertainty in the application of law or technical standards arises, an effective product compliance management system will help master the remaining uncertainty and thereby minimize liability risk for producers.
In the case of true legal uncertainty, the solution has to come from legislators. Initial proposals for the idea of risk-based regulation have been presented, but the discussion and work on new regulation for artificial intelligence has only just begun. Especially for the definition of product properties and risk categories, more interdisciplinary work should be done. We will be able to see the results of that work in the European Commission's proposal for the regulation of artificial intelligence, which is awaited with great anticipation.