
5.1 Introduction

Every new technology raises new legal questions, but the answers will be similar in a surprising number of cases. This is also true for robots, since a multitude of existing legal provisions, developed and proven over the centuries, can be applied to them. This holds, for example, for the contract of sale and the transfer of ownership. In all likelihood, a purchaser will still have to pay a price for a robot in the future, ownership will typically only be transferred once the purchase price has been paid in full, and there will be warranty rights in the event of a defect. Similarly, the owner will continue to be entitled to handle a robot at his or her discretion, to leave it unused or even to destroy it, just as one may do with any other thing one owns.

Before we turn to the new legal questions raised by the use of robots, it makes sense to take a closer look at this phenomenon, i.e. that surprisingly often, new technology does not require new legal answers. For it is only against the background of this phenomenon that it becomes understandable at which points the use of robots entails new legal questions that cannot be dealt with by existing norms. As the example of the contract of sale and the transfer of ownership demonstrates, the novelty of a technology alone is not sufficient to prove a need for legal reform.

The ability of existing law to regulate new technological developments is primarily due to its abstract character. The use of abstract terms makes it possible to regulate a wide range of previously unknown issues. Whether one buys a bread roll or a not-yet-existing quantum computer, decades-old wood or some newly developed material is irrelevant for the norms of the contract of sale. In legal terms, all these are purchased objects, and robots are no exception since they, too, are objects, at least under current legislation. Even if one were to argue that, due to their ability to learn, they might be compared to animals (e.g. Zech, 2020, p. 66), this would not change anything. For at least under German law, animals are also treated as material objects (cf. section 90a of the German Civil Code, BGB).

Thanks to such general terms, the use of robots can often be regulated with existing norms. This applies in particular to contract and tort law. The latter is primarily based on the concept of negligence. Negligence is the central requirement for liability and is generally understood as a failure to exercise the required standard of care, section 276 (2) BGB. What this standard of care consists of in detail depends on a variety of circumstances. These circumstances have not been prescribed by statutory law and may be subject to change, which leaves the courts wide scope for concretisation. The courts therefore already have the legal instruments to impose liability for the use of robots whose technical characteristics are not yet known in detail.

For example, no new law is needed to deem it negligent if robots with unknown tactile abilities are used for the care of people without extensive prior tests. Given the serious risks to life and limb when robots interact with human beings, it would also be negligent not to install an emergency button or a similar mechanism with which they can be quickly and easily switched off (cf. Söbbing, 2019, p. 148). While the technology and the accompanying risks may be new, their legal treatment is still based on the idea of harming no one (“neminem laedere”), which has always been a core principle of law. What these duties of care consist of in detail is not decisive here. Rather, it is important to understand that not every new technological phenomenon requires new regulations, since its legal assessment can remain unchanged in view of the consistency of the basic regulatory standards.

Nevertheless, new technologies often lead to the adoption of new regulations. The history of technology shows that the legislator rarely relies on the flexibility of existing standards. This was the case with the development of railways (in what was then Prussia, for example, the legislator enacted a statute as early as 1838) as well as the spread of the Internet (for German law, e.g. the 1997 statute on teleservices). In some constellations, such changes can indicate where existing law does not adequately cover the new technology. In part, however, such changes are simply due to the fact that the political public frequently underestimates the regulatory power of existing law and wrongly believes that a new phenomenon requires new norms and would otherwise remain unregulated.

Another reason why new norms are frequently provided for new technologies is their origin in political debate. Such debate only works well if there is a pragmatic objective, and the adoption of a law constitutes such an objective; political debate therefore often focuses on it. This leads to the creation of new laws whose regulatory content hardly goes beyond existing law. Such legislative changes are less an expression of a need for regulation than of the political public’s wish to come to an understanding about these measures.

In order to discuss the legal challenges resulting from the use of robots, it is necessary to understand what existing law already provides in this regard. Firstly, this includes an assessment of what the existing abstract provisions mean for the use of robots. Secondly, it must be determined whether these provisions contain regulatory gaps which require new norms for the use of robots. Thirdly, it is worthwhile considering what the content of such new regulations should be. While the first question is a legal one, the second and third questions concern legal ethics. They therefore cannot be answered by an analysis of the applicable law alone, but require an answer to an ethical question that ultimately has to be decided politically, namely the question of the conditions under which robots may be used.

Since politics is in turn guided by the views of the population, one should make use of the findings of empirical social research when examining these questions. Surveys alone, of course, cannot determine the content of a future regulation. Rather, this requires legal and legal-ethical arguments, which can, however, build on the findings of empirical social research. The following legal discussion therefore draws on the results of the 2019 Delphi survey presented in this book.

The following discussion will concentrate on the use of robots for consumers, since the challenges in this field are particularly important. While the use of industrial robots has been common practice for some time, society has little experience so far with the use of robots in the domestic sector. Companies can be expected to have expertise in dealing with robots, which cannot readily be assumed for consumers, especially if they are in need of care. Consumers are typically unable to exchange, shut down, or reprogram robots. A further reason to focus on robots in the domestic sphere is that their use usually involves more personal, and thus more sensitive, data than the use of robots in industry.

Among the issues raised by the use of robots for consumers, liability is of the greatest relevance. There are frequent concerns that with the increasing ability of robots to reach unforeseen decisions, human responsibility will end. This issue is therefore considered first (Sect. 5.2). Subsequently, it will be discussed whether robots can bear responsibility themselves, which presupposes that they are treated as legal entities (Sect. 5.3). Regardless of how this question is answered, it has to be considered whether the data created by the use of robots are protected (Sect. 5.4). Further, the question arises whether there should be a right to be treated, at least to a minimum extent, by one’s own kind and thus by a natural person (Sect. 5.5). To conclude, it is worth considering what expectations there are for legal reforms (Sect. 5.6).

All these questions will be discussed against the background of German law. The legal situation in other European legal systems is likely to be similar, as these are all historically based in part on Roman law and at present shaped by EU law. But even if the answers provided by German law diverge from those provided by other legal systems, they will at least demonstrate the issues that are raised by the use of robots.

In accordance with ISO standard 8373:2012, no. 2.6, a robot is understood below as a machine that performs movements through electronic control. Computers that merely process and output information, but do not perform movements, are therefore not treated as robots in the following. The same applies to electronic devices, such as refrigerators, telephones, or televisions, that are software-operated but cannot move without human intervention. This differentiation has the advantage that it avoids the difficult question of whether robots can act and decide independently. Even if one denies this, it can be observed that robots perform movements without immediate human input.

5.2 Liability

Progress in technology is easily associated with an increase in risks, if only because the risks of new technologies are not yet known and are thus feared more than familiar risks. Empirically, however, technical progress generally leads to a reduction rather than an increase in risks. This is especially true for the use of robots in the domestic sector, as it is not associated with the danger of incalculable damage, as in the construction of a dam or a nuclear power plant, since only one person or, in the worst case, a few people are affected. The risks associated with the use of robots are therefore likely to materialise only from time to time, while the associated gains in safety are a general outcome. Once a malfunction has occurred and been observed, robots of the same design can be switched off in order to prevent such damage in future cases.

All in all, the use of robots for consumers is therefore expected to lead to a reduction rather than an increase of risks. For example, a robot used in the care sector could indicate illnesses of a patient or the danger of a heart attack at an early stage and thus increase the safety of patients overall despite new risks. Nevertheless, the question of who is liable in the event of damage remains important. The overall reduction of risks cannot serve as an excuse for damage in the individual case.

Interestingly, the above-described association of new technology with an increase in risks has already resulted in so-called strict liability being imposed for new technical devices in many other areas. This liability differs from fault-based liability insofar as it does not depend on the culpable actions of individual persons. Such strict liability was provided early on for railways (section 1 Liability Act, HPflG), later for aircraft (sections 44, 45 Civil Aviation Act, LuftVG) and car accidents (section 7 Road Traffic Act, StVG). Strict liability also applies to drugs (section 84 Pharmaceutical Products Act, AMG). Such liability does not require proof that a specific person has violated his or her duty of care and is therefore to blame for something. In principle, it is sufficient that damage has been caused by the new technology.

On the one hand, this strict liability is based on the consideration that the person who significantly benefits from the use of a new technology should also bear the associated risks (Deutsch, 1992, p. 74). On the other hand, those who are exposed to the new dangers related to the technology are to be protected. Moreover, incentives are to be set to invest in safety at an early stage. All this ultimately has the effect that people are significantly better protected against damage caused by technical products than against accidents caused by human error. Humans remain the greatest risk.

Of utmost relevance for the use of robots is existing product liability, which is structured as strict liability (Deutsch, 1992, p. 73). Accordingly, liability arises if a defective product causes the death of a person, injury to the body or health of a person, or damage to an item of property, section 1 (1) Product Liability Act, ProdHaftG. A product is defective if it is constructed in such a way that its use can harm others. If damage is caused by a robot, the widespread hindsight bias (cf. Fischhoff, 1975, p. 288 ff) contributes to the assumption that different programming would have prevented the damage and that the robot is therefore defective. A technical failure is hardly ever classified as an unavoidable stroke of fate and thus as acceptable. If, for example, a robot drops a person being cared for, it will generally be assumed that the programming or the mechanics of that robot have been defective. It is not necessary to prove that a specific programmer or designer could have recognised this. Rather, it is factually sufficient to show that a differently programmed or constructed robot would not have caused such damage.

Nevertheless, there is no product liability if a defect could not be detected at the time of sale in accordance with the state of scientific and technical knowledge, section 1 (2) no. 5 ProdHaftG. However, this is difficult for the producer to prove, since technical expertise can usually show later on that the damage was caused by an error that could have been avoided. For completely new and unforeseen scientific phenomena are not the kind of events that occur in the use of new technology. This is particularly true for the use of robots, since the risks associated with them relate primarily to the laws of mechanics and the employed program code. The laws of mechanics are well researched, so that an error can hardly be traced back to inadequate knowledge at the time of sale. The same ultimately applies to program code. It is a human creation which is not inevitable and could have been written differently. If any damage occurs during the use of robots, it is unlikely to be classified as unavoidable and, accordingly, an exception to the otherwise applicable liability cannot be considered. At most, liability gaps could arise if the software of a third party, which is not liable as the producer, is installed on the robot after it has been put into operation. For such, as yet hypothetical, cases an extension of product liability would provide a solution (European Commission, 2020, p. 14).

These considerations apply in particular to the use of robots in the domestic sector. Unlike robots used in industry, such robots regularly come into physical contact with human beings. This requires the use of so-called soft robots, which are constructed to be more sensitive and yielding in response to human behaviour (Haddadin & Knobbe, 2020, p. 28). For care robots can cause damage through malfunction during physical contact, for example, through too intensive massaging or too much pressure on the patient. Since even a serious suspicion of dangers to life and limb entails a special duty of care (BGHZ 80, 186, 192), the producers of robots are exposed to considerable liability risks. This is all the more true since criminal liability may apply in parallel to civil liability for the damage that has occurred. A producer who places an unsafe robot on the market may end up not only paying for damage arising during its use but may also be punished for negligent bodily injury or even negligent homicide. It is therefore to be expected that care robots will only be brought onto the market after extensive tests have been carried out and have proved their use to be relatively safe.

Precautions against conceivable damage correspond to widely held expectations, as the Delphi survey has shown. Those questioned tended to rate it as possible to “quite probable” that the ethics guidelines of the European Commission will call for damage prevention, i.e. that AI systems should neither cause nor aggravate damage (Fig. 5.1 and Table 5.1).

Fig. 5.1 Predictions about the EU ethics guidelines for trustworthy AI (box plot of four guidelines; the autonomy guideline was rated most probable, the explainability guideline least probable)

Table 5.1 Predictions about EU-Ethics guidelines for trustworthy AI

The duty to take precautions against damage is of the greatest relevance when it comes to the question of who is liable in the event of its violation. As a starting point, it is important to realise that several different persons may be liable at the same time. The fact that the producer of a robot is liable for the damage caused by it does not exclude the liability of others. Contrary to a widespread view among laypersons, it is not necessary to decide whether either the producer or the programmer or the seller or even the robot is liable for the damage. As in many other cases of contractual or statutory liability, there may be so-called joint and several liability, under which each injuring party is liable for the entire damage, section 426 BGB. This may, therefore, include all the persons mentioned above. Who bears which share is then determined in the internal relationship between the injuring parties. This is of little importance for the injured party, as he or she can choose whom to claim damages from and to what extent.

If a robot causes damage to the health or property of a consumer, at least the producer is liable for damages according to section 1 (1) Product Liability Act. This provision is based on Art. 1 of European Directive 85/374/EEC and is therefore similar in all Member States of the EU. Liability arises irrespective of where the producer is located and of whether he or she is the one who made the decision leading to the damage. Thus, even if the robot is seen to have caused the damage itself, the producer remains responsible, as this does not call into question the fact that he or she manufactured the robot. It is not likely that this will change in the future, although the development and dissemination of robots would be promoted if the liability of their producers were restricted. For the conviction already mentioned persists that producers profit from the sale of robots and that their use is associated with enormous risks.

The liability of the producer is complemented by the liability of the person under whose trademark the robot is distributed, as he or she is also treated as the producer, section 4 (1) sentence 2 ProdHaftG. The person who is considered to be the producer in commercial transactions can therefore not exonerate himself or herself by pointing out that the robot was manufactured by someone else. The same applies to a person who brings a robot onto the European market. Responsibility can therefore not be delegated to a person in a non-European country where liability can hardly be enforced.

In addition to this producer’s liability, there is fault-based tort liability, which can be based on all actions leading to damage and may therefore apply not only to the producer, but also to the distributor and the seller as well as other persons or institutions involved in the use of robots, for example, a care home. On the producer’s side, four main types of errors lead to liability. Firstly, there are construction errors (BGH, NJW 1990, 906, 907), where liability arises if the planning of a robot has not sufficiently taken all risks into account. This would be the case, for example, if no emergency button or similar safety mechanism to switch off the robot had been provided. Planning would also be inadequate if a robot were unable to process the information that a human is standing in its way, resulting in a collision.

Secondly, liability arises from manufacturing errors, which occur when a defect arises during the production of the individual product. This is especially the case if construction plans have been inadequately implemented, for example, because defective material was overlooked. Thirdly, liability arises where missing or faulty instructions lead to damage (BGHZ 116, 60, 72–73).

Fourthly, inadequate product monitoring also leads to liability (BGHZ 99, 167, 171–172; NJW-RR 1995, 342, 343). This is based on the obligation to monitor whether any errors occur during the use of a product, especially if its technology is complex. In order to fulfil this obligation, producers can contact maintenance workshops, ascertain through purchaser surveys whether any problems have occurred, or follow reports in the press and on the Internet. This obligation to monitor products is only lowered if products have been on the market for a long time, which is currently not the case for robots in the domestic sector.

As has been emphasised above, such liability of the producer does not exclude the liability of the person who has sold or who operates the robot. The seller and the operator of the robot are therefore liable if, due to similar cases, they should have known that it could cause damage. In contrast, they are not liable under current law if damage occurs suddenly and could not have been predicted by them. Some have proposed that the operator’s liability should also be strict (Expert Group, 2019, p. 39). In addition, contractual liability may apply if a robot’s use is based on a contract. This is, for example, the case in special-care homes. In general, this liability is also fault-based.

Irrespective of the type of liability, contributory negligence may reduce the amount of compensation. This is the case if fault on the part of the injured party has contributed to the occurrence of the damage, section 254 BGB. This is of particular importance for liability involving robots, as their movements may also depend on how they are treated by their users. If a user instructs the robot to apply more pressure to her or his body, any damage occurring at a later stage might be caused by the robot having been taught this behaviour as normal. This does not place on the user the responsibility to teach the robot correct behaviour. However, it exempts the producer from liability for damage if it is apparent that it has been caused solely by incorrect use. Nevertheless, it should be noted that the producer of a product is also expected to take into account the possibility of incorrect use and therefore cannot exonerate himself or herself by pointing out that the user was warned against a certain use. In the example of pressure being applied to the body, an obvious precaution would be to limit the intensity of this pressure to a certain level regardless of the preferences of the user.
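A minimal sketch of such a precaution, assuming a hypothetical Python control interface, may illustrate the point; the function, the unit, and the limit value are invented for this example and do not describe any actual product.

```python
# Hypothetical sketch: enforce a hard safety ceiling on applied pressure,
# independent of any preference the user has taught the robot.

MAX_SAFE_PRESSURE_NEWTONS = 25.0  # producer-defined ceiling (invented value)


def effective_pressure(requested: float, learned_preference: float) -> float:
    """Return the pressure the robot actually applies.

    The robot may adapt to the user's taught preference, but the
    producer-defined ceiling always prevails.
    """
    desired = max(requested, learned_preference)
    return min(desired, MAX_SAFE_PRESSURE_NEWTONS)


# The user has repeatedly asked for stronger massages (40 N),
# yet the applied pressure is capped at the safety limit.
print(effective_pressure(requested=40.0, learned_preference=38.0))  # -> 25.0
```

The design choice expressed here is simply that a producer-defined limit overrides whatever the robot has learned from the individual user.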

As these cases show, liability depends on abstract concepts such as negligence and fault, which require concretisation with regard to the specific use of a robot. As has been observed above, this is a considerable advantage on the one hand, since it allows the law to approach new technologies flexibly. On the other hand, it means that the drawing of specific boundaries will be left to the courts. Interestingly, this also corresponds to the expectations of those questioned in the Delphi, who considered it most likely among the various scenarios that in 2030 the “clarification of liability issues in self-learning, autonomously acting AI systems is now up to the highest German court” (Table 5.2).

Table 5.2 Delphi scenarios of ethical and legal challenges

This expectation that liability will be clarified by the courts exceeds the expectation that “requests from government agencies to be allowed to hack, without a justified reason, information from AI systems for surveillance and prevention purposes have always been rejected by the courts”. In addition to the expected need to clarify the legal details, this indicates an expectation that the use of robots is associated with a potential for damage and that questions of liability will therefore have to be resolved. It remains unclear, however, whether respondents were aware that this would primarily concern the details of liability and less the fundamental question of whether liability arises at all.

Another interesting aspect of the respondents’ expectations is their comparative uncertainty as to whether ethics committees in 2030 will be seriously concerned with the question: “Is it still appropriate to legally view robots as a thing and not as a creature to be endowed with personal rights when they live in a common household with people?” This uncertainty about the legal status of robots is well founded insofar as that status is in general irrelevant for liability. Even if robots were treated as legal entities, as will be discussed below, this would not exclude the liability of the producer and the seller. Accordingly, it is not to be expected in future practice that liability for a robot will depend on its treatment as a legal person.

5.3 Legal Personhood

It would be a legal revolution if robots were treated as separate legal entities. This question has therefore caused a wide debate (Solum, 1992, p. 1231 ff; Balkin, 2015, p. 55 ff; Ebers, 2020, p. 99). A starting point for this debate is the observation that, already today, robots are used for tasks which humans do not want to perform or are unable to perform (Balkin, 2015, p. 59). An example of this is software that detects melanomas more reliably than experienced physicians (Brinker et al., 2019, p. 47 ff). Robots thus seem to make decisions that were previously the responsibility of humans while acting in ways that humans find difficult to understand. Should they therefore be treated legally as persons?

An obvious objection is that robots are programmed and constructed by humans and, as artefacts, lack the ability to reproduce that characterises living beings. However, this attribute is not decisive for the question of the legal personality of robots, for two reasons. Firstly, it cannot be excluded, at least in theory, that robots in turn construct other robots (von Neumann, 1966, p. 79). Secondly, it is not evident why legal capacity should depend on the ability to reproduce. Independently of this capacity, humans have legal status as persons because they are intrinsically valuable.

In addition to human beings, however, the law also treats other entities as legal persons, among them public limited companies and associations. None of these are natural persons as they lack essential human characteristics such as being able to act on their own. They always have to be represented by others. Nevertheless, they have rights and obligations and can therefore sue and be sued in courts. The fact that they are represented by human beings does not exclude their legal capacity any more than the legal capacity of an infant is called into question by the fact that it is represented by its parents in court.

Should robots therefore be represented by humans and treated as legal entities? Or would this call into question the dignity of human beings since the law would then grant robots the same rights? At least the respondents of the Delphi believe it to be probable that human self-determination will have to be protected in view of the advent of robots (Fig. 5.1). Interestingly, they are even more certain in this respect than as to whether ethical guidelines should be established in order to encourage damage prevention.

While human autonomy appears to be threatened in its exclusivity when there are other legal entities besides human beings, it may theoretically also be at risk if robots lack this quality. If they take over a multitude of decisions from humans and thus shape reality, it becomes important to be able to defend oneself against their “actions” if those infringe one’s liberty or property. Legal personhood, and thus the capacity to be sued, could then arguably help. However, this would only be necessary if, unlike under current law, there were no other responsible parties against whom a claim could be made (Sect. 5.2).

Since robots differ from humans in central characteristics such as origin, sentience, and the ability to develop, one might consider attributing legal personhood to them if they resembled legal persons. Such persons are characterised by the fact that they exist independently of their members or shareholders as well as of the objects belonging to them. In extreme cases, there may be legal entities, such as an assetless association, which have no property and for which no people work. Legal persons are thus independent of their human founders and of the material objects belonging to them. Such independence does not exist in the case of robots. They are programmed by humans and equipped with hardware. Accordingly, they remain objects that depend upon their material substance, although to some extent they exist independently of their developers and operators (Borges, 2018, p. 978).

The dependency upon its material substance is particularly obvious in the fact that the existence of a robot can be terminated at any time by its destruction. This is different in the case of a legal person, which comes into being and ceases to exist only by a decision of the legal system. If the material objects belonging to a legal person are destroyed, this legal person does not cease to exist.

According to current law, robots lack the ability to have rights and obligations, as do all other material objects and every animal, irrespective of any other properties they may possess. Therefore, even if robots had completely different technical properties, such as the ability to develop further and the ability to reproduce, they would not automatically be legal persons. The decisive question is therefore not a legal but a legal-ethical one, namely whether robots should have their own rights and obligations. Technically, this is possible, just as some legal systems have already granted rights to animals and rivers, e.g. to the Río Atrato at the transition to the Central American landmass (Talbot-Jones, 2021, p. 208). The question therefore is whether there are good reasons for recognising the legal capacity of robots. In the case of human beings, legal capacity is ultimately based on their intrinsic value, i.e. they deserve protection for their own sake. This does not apply to robots, regardless of their level of technical development. Among other things, this is because they lack consciousness and are therefore unable to set themselves a purpose and experience the world in a conscious way. Rather, they are subject to the programmer’s specifications, for example, with regard to what is to be learned (Zech, 2020, p. 42).

Being guided by human objectives, robots act for the benefit of third parties. It is not conceivable how this benefit could be promoted by granting robots their own legal personality. Rather, the best approach to ensure it appears to be treating robots as objects of which their owners may dispose at will, section 903 BGB. People benefit from this either directly, if they own the robot, or indirectly, if they are a member or shareholder of a legal entity that owns the robot. This ensures that the legal system ultimately only promotes human interests. Denying legal capacity to robots ensures that this remains so. The state of technical development is not important in this context, since the ultimate reason for legal capacity is not a specific technical capability, but the promotion of human interests. The discussion on how the law should treat robots is therefore focused on liability, not on possible robot rights. Interestingly, this corresponds to the expectations of the Delphi respondents, who do not tend to anticipate that ethics committees will be concerned with the question of whether a robot is a “creature to be endowed with personal rights” (Fig. 5.2).

Fig. 5.2 Delphi scenarios of ethical and legal challenges (box plot of eight scenarios; the liability scenario was rated most probable, the “creature” scenario least probable)

The question of whether robots should be able to appear in court as plaintiffs also shows that it is not their technical capability which matters here. For it is not relevant whether they would be able to formulate and substantiate a legal claim, which seems at least conceivable with the appropriate development of software. Rather, the question is whether this would promote human interests. This is the case when human beings themselves act as plaintiffs and formulate their claims. The same applies if a legal person acts as plaintiff, as it represents human beings or furthers their interests by enforcing its rights. Therefore, recognising the ability of legal persons to act as plaintiffs in court ultimately promotes human interests, while it is not evident that the recognition of a corresponding capacity of robots could promote human interests at all.

These difficulties associated with treating robots as legal persons have led the European Parliament (European Parliament, 2017, no. 59–60) and some authors to argue that they should be granted partial legal capacity (Teubner, 2018, p. 204 f; Schirmer, 2016, p. 663; Specht & Herold, 2018, p. 43). This would mean positively allocating those rights and obligations required for dealing with the digital entity in legal transactions while excluding others. For example, robots could be liable, but would not be able to form a limited company. Other authors have rejected this proposal (Expert Group, 2019, p. 38; Riehm, 2020, p. 47; Linke, 2021, p. 203; cf. www.robotics-openletter.eu), and with good reason, since the “actions” of robots can always be attributed to the producer or the operator. There is therefore a comprehensive liability regime (Sect. 5.2). Due to strict product liability and the parallel responsibility of different actors, it is not likely that there will be gaps in liability with regard to the use of robots. If, for instance, a care robot injures a patient during treatment, the patient would be sufficiently protected.

However, other authors assume a “responsibility gap” between the actions of robots and civil liability (Teubner, 2018, p. 157 ff). This gap is said to arise because robots are supposedly able to act on their own authority and cause damage to the rights or legal interests of others. From this perspective, the human being in the background could not be accused of breaching a duty of care. Therefore, robots themselves would have to be held responsible. As the argument goes, the recognition of partial legal capacity would avoid the unilateral passing-on of risks to the injured party.

However, firstly, this argument fails to recognise that fault does not only occur where an action directly leads to damage. Once a robot is deployed, it may no longer be possible for the designer or the seller to control its movements. In the case of self-learning systems, it may indeed be unpredictable how they will behave. Liability, however, may already arise from the decision to use such an unpredictable machine. It would be negligent not to provide safety precautions, such as a switch-off button or a corresponding code word (“Siri, stop!”), to prevent damage. Secondly, product liability does not depend on fault in any case, and a liability gap is therefore not plausible.
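To make the idea of such a precaution concrete, the following minimal sketch shows how a code-word emergency stop might look in Python; the class, the method names, and the stop phrases are hypothetical and are not taken from any actual robot platform.

```python
# Hypothetical sketch of a code-word emergency stop.
# Class, method names, and stop phrases are invented for illustration;
# a real robot would expose its own motion and speech interfaces.

STOP_PHRASES = {"stop", "siri, stop!", "halt"}


class CareRobotController:
    def __init__(self) -> None:
        self.motors_enabled = True

    def emergency_stop(self) -> None:
        """Immediately disable all actuators, regardless of the current task."""
        self.motors_enabled = False

    def on_speech_recognised(self, utterance: str) -> None:
        """Assumed callback from a speech-recognition component for every utterance."""
        if utterance.strip().lower() in STOP_PHRASES:
            self.emergency_stop()


robot = CareRobotController()
robot.on_speech_recognised("Siri, stop!")
print(robot.motors_enabled)  # -> False
```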

Moreover, it is hard to see how a robot could have recoverable assets. If a robot causes significant damage, it is questionable whether it can still be used and retains a monetary value, because hardly anyone would be prepared to pay for its acquisition. Similarly, even the recognition of partial legal capacity does not ensure that robots have sufficient property. The insolvency risk in the event of damage is therefore enormous (Ebers, 2020, p. 102). If robots had significant assets and did not belong to any particular person, others would be allowed to appropriate them, section 958 (1) BGB. This could only be prevented if robots were recognised as something that deserves protection for its own sake. However, as shown above, there is no reason for this.

In German private law, the concept of partial legal capacity is not altogether unknown, but it is nevertheless an alien element. A frequently cited example is the unborn human being (nasciturus), who can inherit and thus acquire rights of its own (section 1923 (2) BGB; Mayinger, 2017, pp. 179 f). However, the recognition of the capacity to inherit only serves to bridge the time between conception and birth and is aimed at the human being to be born, not at the nasciturus’s own interests. Nor is the nasciturus’s capacity to inherit connected with any significant liability. The representatives of the nasciturus may disclaim the inheritance and thus release her or him from inherited debts (sections 1942 ff BGB), so that there is little risk of a financial burden. An analogy to the nasciturus is therefore not helpful with regard to the use of robots, which are supposed to have duties.

If the objective of partial legal capacity is not the additional liability of robots, but an exemption from the liability of the owner, a liability privilege for the owner or operator would be sufficient. A new legal construct of partial legal capacity for robots is not required in this regard. Such a privilege would express the purpose of relieving owners more clearly. However, this effect reveals the doubtful nature of such a privilege, since it is hard to see why the owners or operators of a robot should be exempted from the risks associated with its use while retaining the profits resulting from it. It is equally implausible that uninvolved third parties should bear the risks arising from a chance encounter with a robot.

To conclude, the legal personhood of robots may inspire the imagination and generate new legal ideas. However, it does not solve any problem of the current law. In this respect, the discussion of legal personhood for robots resembles science fiction literature, which is also inspiring, but not a reliable source of information about physics, technology, or law.

5.4 Data Protection

The handling of the personal data that is continuously collected by robots also raises pressing legal questions. Care robots, for example, analyse the environment via cameras and sensors, and register the intentions of the persons cared for (Steinrötter, 2020, p. 336). Such robots store detailed data about those requiring care; for example, data on their state of health or personal secrets entrusted to them. This includes first of all health data: who takes which medication, in what dose and how frequently? Who has undiagnosed high blood pressure? In addition to this, robots communicate with those in need of care. They can already perform simple communication tasks today, for example, via integrated speech recognition software (Steinrötter, 2020, p. 336).

Due to the extent and the sensitivity of the collected data, two fundamental rights are legally relevant, both of which have been developed over time by the Constitutional Court: the right to informational self-determination (BVerfGE 65, 1, 43) and the fundamental right to the confidentiality and integrity of information technology systems (BVerfG, NJW 2008, 822, 824). The former is of particular relevance for the processing of data. It is regulated by the binding provisions of the General Data Protection Regulation (GDPR) and is in turn flanked by the right to respect for private life under Art. 7 of the Charter of Fundamental Rights of the European Union and Art. 8 para. 1 of the European Convention on Human Rights.

The central question with regard to data protection in the use of robots is to whom the data processing can be attributed. If this is the data subject herself or himself, there are no restrictions set by data protection law. A robot that one has purchased or rented for one’s personal use, and whose actions one can determine, can therefore also collect sensitive health data, for example, by measuring blood pressure or by taking photos.

However, this is not the case if the data processing is attributed to other persons, in particular if a care home operates the robot and reads out and processes its data. In such cases, the data processing always requires justification, which may be provided by consent, Art. 6 lit. a) GDPR. If health data are concerned, consent must be explicit or the data processing must be necessary for the provision of medical care, Art. 9 para. 2 lit. a), h) GDPR. The capacity to give consent is problematic, for example, where patients suffering from dementia or severe psychosis are concerned. In such cases, consent can be provided by a guardian or by a living will made in advance (Steinrötter, 2020, p. 339).

If no effective consent of the data subject or his or her guardian can be established, processing may nevertheless be permitted. Firstly, this applies if the processing is a prerequisite for treatment in healthcare, Art. 9 para. 2 lit. h) GDPR. Secondly, data may have to be stored in order to fulfil documentation obligations under tort law, which also serve to avert future damage.

Provided that no health data are concerned, data processing is also permitted according to Art. 6 lit. b) GDPR if required for the fulfilment of a contract. A robot is therefore permitted to record and process all data that promote the purposes of a contract, such as enabling communication, which opens up wide possibilities for data processing. Consequently, only a few cases are conceivable in which handling the collected data would clearly not be necessary according to the purpose of the contract. For example, the operator of a care robot would not be permitted to use the robot to collect data on any criminal offences committed by the patient if these were unrelated to the objective of communication.

In view of this variety of options for enabling data processing, it is important to protect the collected data from external access by third parties who at first glance have nothing to do with the use of the robots. This applies in particular to the state accessing data for security reasons and for purposes of criminal prosecution.

The legal situation can be illustrated with the example of a person in need of care confessing the murder of his wife to a robot. Can the police and the public prosecutor’s office access the data if the offender was in full possession of his mental powers when confessing? In their collection of evidence, they have to make an important distinction, which is already laid down in the decisions of the Constitutional Court on a diary (BVerfGE 80, 367 ff) and of the Federal Court of Justice on self-talk (BGHSt 57, 71 ff). In both cases, the suspect had disclosed details of a crime he had committed. In the self-talk case, the suspect was sitting alone in his car while being monitored, without his knowledge, by the law enforcement authorities by means of technical devices on the basis of section 100f German Code of Criminal Procedure, StPO. He spoke to himself, uttering incriminating words which later identified him as the perpetrator in the murder case with which he was charged.

In the diary case, the accused was suspected of beating a woman to death. He had hidden records, similar to a diary, in his parents’ house. These included indications of his problematic relationship with women, which the court regarded as incriminating evidence. While the Constitutional Court judged the diary to be admissible evidence (BVerfGE 80, 367, 376), the Federal Court of Justice rejected the use of the recorded self-talk, assuming an independent prohibition on the use of such evidence (BGHSt 57, 71, 74).

The difference cannot lie in the disclosure of private information as such. Secrets may also be found in a diary, and the author of a diary generally does not want others to read his or her intimate thoughts. Rather, the decisive factor is the circumstance of feeling unobserved. The driver of a car without passengers may generally trust that no one is listening to him. Human dignity, as guaranteed by Art. 1 of the Basic Law, protects this personal space from the law enforcement authorities, even though this may impede the investigation of criminal offences (BGHSt 57, 71, 75). The author of a diary, in contrast, has to expect that someone might get hold of the written record (cf. BVerfGE 80, 367, 376), even if only on the occasion of a house search. Unlike the spoken word, the written word is not transient.

This distinction can be applied to the collection of robot data for purposes of criminal prosecution: if the person concerned had to expect the collection of her or his personal data, there is no general prohibition of data processing, since the most personal sphere of life is then not affected. However, if the person concerned did not have to expect that data from his or her most personal sphere would be collected, the constellation is similar to that of self-talk, the recording of which may not be used.

The robot may well be a welcome interlocutor. But the user should not have to trust in vain that her or his spoken word will remain transient and will not be recorded for posterity. As far as the most personal sphere is concerned, section 100d (1) StPO explicitly requires that such personal data not be used by the law enforcement authorities. If an attempt to deceive is involved, for example because the care robot is falsely declared to be defective, the collection of evidence is prohibited according to section 136a (3) StPO.

However, for data outside the most personal sphere, a robot may be accessed without the knowledge of its user if he or she is suspected of a particularly serious crime (e.g. murder, aggravated robbery) and the course of events or the whereabouts of the accused cannot be established otherwise, or only with great difficulty, section 100b StPO. These provisions show that access to robot data is subject to considerable, though not insurmountable, legal restrictions.

The law thus provides some protection against access to a robot’s records, as such access requires at least a justification by law or explicit consent. Against this background, the participants of the Delphi are surprisingly certain that an intervention in artificial intelligence systems will not take place without such a justification (Fig. 5.2). Whether this expectation is confirmed will depend not only on the applicable law, but also on its consistent implementation.

5.5 Right to Human Contact

People react to robots with a certain sympathy and affection if they are designed to be humanoid. This might result in a reality that is a horror scenario for the vast majority of people: a care home where a multitude of robots move around, but not a single human being, except for the people being cared for. In such a setting, those in need of care would be even more likely to treat robots as persons because of the lack of human contact.

Such a scenario of “being alone among robots” raises the question of whether it is compatible with the guarantee of human dignity provided by Art. 1 para. 1 GG. Because of their social nature, human beings should not be forced to spend their existence in total isolation (Stöger, 2020, p. 136 f). They rely on communication with their fellow human beings (European Parliament, 2017, no. 32). These requirements do not only prohibit the state from isolating people. Rather, the state must also actively protect human dignity, Art. 1 para. 1 sentence 2 GG. This includes action to prevent a situation in which people are surrounded only by robots. In this respect, there is a right to a minimum of human contact.

This right prevents an unrestricted technicalisation of care. In particular, people in need of care who, due to their lack of mobility, can hardly come into contact with others have to be treated in a way that allows for a minimum of human contact. Robots cannot altogether replace human carers (Deutscher Ethikrat, 2020, p. 51), since they lack the empathy to put themselves in the situation of a person requiring care. Nevertheless, they can provide an important service in the care sector.

The right to a minimum of human contact does not exclude the use of robots in many areas of care and for other domestic tasks, if only because an essential aspect of care and domestic work consists in addressing hygienic and physical, rather than communicative, needs. A cleaner is not primarily expected to be entertaining or communicative. Accordingly, there is no constitutional guarantee that all domestic or care work will be undertaken by human beings or that there will be extensive human contact. How services are provided is primarily a question of political and private decisions. It therefore depends very much on the resources that private individuals and society are prepared to devote to care. The constitution provides only very few requirements in this respect.

5.6 Challenges for Law and Ethics

If one considers the various legal and ethical challenges once again, it becomes apparent that the use of robots in the domestic and care sectors is already regulated by a large number of provisions. At least with regard to the fundamental decisions of the legal system in favour of extensive strict liability in the use of technology, the rejection of legal personhood for robots, and the protection of personal data in their use, a fundamental legal reform does not seem to be necessary. This does not exclude revisions of some details, such as those currently discussed at the suggestion of the European Commission (European Commission, 2020). These include, in particular, an explicit liability of robot operators (Zech, 2020, pp. 81, 101) and the introduction of a compulsory insurance system (European Parliament, 2017, no. 57–59). Such changes can be initiated by the legislator. In many cases, however, it will be left to the courts, as in other areas, to clarify the details by defining concrete requirements, such as liability for negligence or for defective products, on the basis of the abstract provisions.

This concretisation by the courts corresponds to the expectations of the Delphi respondents insofar as they expect, with considerable certainty, court proceedings both on the decision about a person’s creditworthiness when applying for a loan and on the calculation by the insurance industry of the risks associated with a person’s lifestyle (Fig. 5.2). As a principal component analysis shows, the answers to both questions can be traced back to a considerable extent to a common factor (PC1) (Table 5.3):

Table 5.3 Principal component analysis of the ratings in Fig. 5.2

It seems fair to assume that the PC1 factor expresses the willingness to have legal issues clarified in court if significant economic consequences depend on this. This is firstly the case with decisions on creditworthiness since the credit instalments to be paid by a borrower depend on the standards applied. Therefore, if the courts prohibit the consideration of certain circumstances—such as a conviction that has already been erased from the criminal record—this can have a significant economic impact on the borrower.

Secondly, the same applies to the expectation examined in the Delphi as to whether the processing of data on the lifestyle of the insured party will be subject to legal proceedings in the future. This also has considerable economic consequences, namely for the amount of insurance premiums to be paid. Accordingly, it may be worthwhile to have the courts review what data insurance companies are allowed to use. It is conceivable, for example, that courts may prohibit insurance companies from negatively considering the policyholder’s contact with convicted criminals in his or her own family when calculating insurance premiums, as it would make the rehabilitation of criminal offenders more difficult if even their own relatives were to avoid them. It is therefore not surprising that this question regarding the assessment of recreational behaviour is judged similarly to that of a person’s creditworthiness. Both questions involve issues of economic significance, which cannot be clarified by the legislator but only by the courts.
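For readers unfamiliar with the technique, the following minimal Python sketch illustrates how such a principal component analysis of scenario ratings could be carried out; the rating matrix is invented for illustration and does not reproduce the actual Delphi data.

```python
# Minimal illustration of a principal component analysis of scenario ratings.
# The rating matrix is invented and does not reproduce the actual Delphi data.
import numpy as np
from sklearn.decomposition import PCA

# Rows: respondents; columns: probability ratings (0-100) for four scenarios,
# e.g. creditworthiness, insurance lifestyle data, robot personhood, data deletion.
ratings = np.array([
    [80, 75, 30, 35],
    [70, 72, 25, 30],
    [90, 85, 40, 45],
    [60, 65, 20, 25],
    [85, 80, 35, 30],
])

pca = PCA(n_components=2)
pca.fit(ratings)

# The loadings show how strongly each scenario contributes to PC1 and PC2.
print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Component loadings:\n", pca.components_)
```

In such an analysis, scenarios with high loadings on the same component are those that respondents rated in a similar way, which is the sense in which PC1 and PC2 are interpreted here.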

Another interesting correspondence in the respondents’ answers becomes visible when two further questions are considered. The first is whether ethics committees will in future be confronted with the question of whether robots are still to be treated as objects and not as creatures endowed with personal rights. The second is whether the deletion of data on termination of a robot lease, which has so far been common practice, will meet with ethical concerns in the future. Both scenarios are characterised by a deviation from ethical principles that have been considered mostly plausible up to now: in the first case, the treatment of robots as objects; in the second, the systematic processing of personal data.

The apparent scepticism towards these scenarios might therefore be based, in both cases, on the assumption that ethical principles, unlike technology, hardly change. This assumption could be reflected in a second principal component (PC2). If one considers the topicality of debates on justice, which have been conducted since ancient times, the assumption appears justified. As much as robots revolutionise technology and require the adaptation of the details of legal provisions, they are not likely to change legal and ethical principles.