This book on the topic “Product Development within Artificial Intelligence, Ethics and Legal Risk - Exemplary for Safe Autonomous Vehicles” was prepared by the author on the basis of more than two decades of experience at automobile manufacturers (Volkswagen AG, Audi AG, Daimler AG) in the legal department, product analysis and traffic accident investigation, in interaction with research and development through to market introduction. This professional experience included the joint development of potential and risk assessments for the evaluation of new automated systems using image recognition with Artificial Intelligence, based on results from accident analysis. Further expertise was gained through the worldwide clarification of technical product liability claims involving fatal personal injury and property damage. This included coordination with authorities and development departments, consultation of the responsible lawyers, and preparation for court depositions as a company representative.

These experiences point to a tendency: future developments increasingly raise the question of whether the manufacturer can be held responsible for damage caused by the technical system. The manufacturer is judged on whether, after weighing the risks, it has done everything reasonable to provide a safe product. This requires safety measures which, according to the state of the art in science and technology available at the time the product is placed on the market, are constructively possible and appear suitable and sufficient to prevent damage. If certain risks associated with the use of the product cannot be avoided according to the relevant state of the art in science and technology, it must be examined whether the hazardous product may be placed on the market at all. This examination considers the type and extent of the risks, the probability of their occurrence and the benefits associated with the product.

Final inputs for this book resulted from work for Daimler Research and Development and for the Daimler and Benz Foundation in the project “Villa Ladenburg – Autonomous Driving”, in which the technical, legal and social aspects of automated driving were investigated.

The knowledge presented in this book supports the development of safe automated driving functions, especially with regard to availability, reliability and, above all, risk minimization. Meeting the applicable standards and laws for safety-related product development “between Innovation and Consumer Protection” proves to be a major challenge for all developers involved. Recurring questions during the author's internal consulting activities within the development departments for safety-relevant and automated vehicle systems at Volkswagen, Audi and Daimler AG confirm these uncertainties. This experience was complemented by holding the responsible Audi project management role during the preparation of the development guideline “Code of Practice for the Design and Evaluation of Advanced Driver Assistance Systems (ADAS)”, with mentoring for its integration and implementation in the VW Group technical specifications. The ADAS Code of Practice was defined in close cooperation with the first drafts of ISO 26262 in the FAKRA Kreis (Facharbeitskreis Automobil); a first meeting of the ISO group took place in 2005 (Ross H–L, 2019). The updated ISO 26262:2018 also refers to the ADAS Code of Practice.

The motivation for this book is the increasing embedding of safety-relevant components, complex electronic and mechatronic vehicle systems and man–machine interfaces in new motor vehicles. These new possibilities, up to fully automated driving, promise time savings due to a more homogeneous traffic flow, which reduces the number of traffic jams and obstructions. The time that would otherwise have to be spent at the wheel can be used for other activities. Furthermore, vehicles can be shared according to the “ridesharing principle” (Lenz B, Fraedrich E, 2016): several people can be transported at the same time and owning a car is no longer a must, making overall traffic lighter, more sustainable and more efficient. Even people without a driving license could ride in a fully automated car. Ultimately, increasing automation of driving functions (apart from the not-to-be-underestimated driving experience of humans) also promises greater road safety, as individual, human-related driving errors can be avoided.

Ever since the first Benz patent motor car of 1886, individual mobility by motor vehicle has been the subject of controversial discussion, for example over environmental or social issues. A sad record was reached in 1970: almost 600,000 injured road users and 21,332 road deaths in Germany alone (Statistisches Bundesamt 2018). Today the automotive industry is confronted worldwide, more than ever before, with fundamental strategic questions, in particular concerning economical, environmentally friendly and automated driving technologies. Major advances in scientific and technical knowledge are the cause of a fundamental, or disruptive, change in this sector.

In the first half of the twentieth century, the Austrian economist Joseph Schumpeter described such radical upheavals as “creative destruction”; according to Schumpeter, a new order can only emerge through destruction (Schumpeter, J. A. 1942 and 2017). The Harvard economist Clayton Christensen described these transitions as “disruptive innovations” that involve shocks and the complete reshaping of industries (Christensen, C. M. 2003). Peter Drucker argued that innovation and entrepreneurship are disciplines with their own fairly simple rules (Drucker P, 2014).

Robots are already replacing drivers in pilot and research projects. Image recognition using Artificial Intelligence (AI), Deep Learning and neural networks allows the progressive automation of driving tasks in vehicle guidance, up to and including driverless vehicles. Environment sensors can provide the location (coordinates x, y, z, or distance and angle), the dimensions (length, width, height) and the speed (longitudinal/transverse or relative) of an object; a purely illustrative sketch of such an object description is given at the end of this introduction. Artificial Intelligence refers to computers performing tasks that would otherwise require human intelligence. Humans have no difficulty recognizing objects and forming these observations into a mental model of the world. Through Deep Learning with neural networks, a learning method in Artificial Intelligence, vehicles are able to “learn” to understand their environment. Data processing by methods such as “real-time scene labeling” is making significant progress. The further technological development of driver assistance systems with powerful sensor and information technologies is a prerequisite for the steady automation of driving tasks in vehicle control. The former chairman of Daimler's Board of Management, Dr. Dieter Zetsche, said:

Anyone who only thinks of technology has not yet realized how autonomous driving technology will change our society. The car grows beyond its role as a means of transport and is finally becoming a mobile living space (Daimler AG Media, 2019).

Over the next two decades, in addition to technical and legal challenges, questions of responsibility, tolerances, expectations and the relationship between man and machine will have to be redefined for self-driving cars. The best technology will not be perfect, although it will make fewer mistakes than a human driver. In the future, the car will do the same as we do: it will learn every day and thus cope ever better with the complex demands of modern private transport (Ernst & Young Global Limited, 2015).
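To make the sensor outputs described above a little more concrete, the following minimal Python sketch shows one possible representation of a detected object carrying position, dimensions and velocity. The class, its field names and its units are illustrative assumptions made for this sketch and are not taken from any particular vehicle system.

    # Illustrative sketch (not from any specific system): an object detected by
    # environment sensors, described by position, dimensions and velocity.
    from dataclasses import dataclass
    from math import atan2, hypot

    @dataclass
    class DetectedObject:
        x_m: float          # longitudinal position relative to the ego vehicle [m]
        y_m: float          # lateral position [m]
        z_m: float          # vertical position [m]
        length_m: float     # object length [m]
        width_m: float      # object width [m]
        height_m: float     # object height [m]
        v_long_mps: float   # longitudinal (relative) speed [m/s]
        v_lat_mps: float    # transverse (relative) speed [m/s]

        def distance_m(self) -> float:
            """Distance to the object, the polar alternative to x/y coordinates."""
            return hypot(self.x_m, self.y_m)

        def angle_rad(self) -> float:
            """Bearing angle to the object in the horizontal plane."""
            return atan2(self.y_m, self.x_m)

A perception stack built on Artificial Intelligence would typically deliver a list of such objects in every sensor cycle; the downstream driving function then has to decide how to act on it safely.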

1.1 Initial Situation

To meet consumers' expectations, the development of automated driving – especially fully automated driving – calls for the management of the associated risks. On the one hand, there is pressure to introduce connected automated vehicles to the market in the hope of more efficient, comfortable and safe traffic. On the other hand, the performance of the automated system should be designed in such a way – based on the predefined framework conditions – that no safety issues arise.

Probably every driver can still remember the exciting practical driving test: demonstrating to the driving examiner – after lessons on the motorway, in the city and at night – that the vehicle can be controlled safely, collision-free and in compliance with the rules. It was clear that only subsequent practical experience made one a safe driver who could master even challenging traffic situations. Sometimes we even learn that safe driving does not necessarily have to be rule-compliant, especially if an evasive maneuver can avoid an impending collision.

The question for the future is: how should vehicles with advanced automated systems, including driverless vehicles, prove that they can handle a sufficient number of traffic situations safely?

Individual test drives as in the past are certainly not enough. To give typical figures for a new vehicle release: according to Daimler AG, a total of more than 12 million test kilometers were driven with the W213-series Mercedes E-Class (market introduction 2016), compared with 36 million kilometers for the previous W212 series, a model built from 2009 to 2016 (Maurer, Gerdes, Lenz, Winner, 2016). Better simulation and consistent improvement of the prototypes made it possible to test intensively and in detail from the very beginning.

While scientists calculated that billions of test kilometers would be required, solutions relying far more on simulation and additional safety verification became necessary. It may be assumed that the required number of test kilometers depends on the distance driven between two fatal accidents. Following this argument and the figures of the German Federal Statistical Office, a motorway pilot would have to be tested over the 662 million kilometers that lie, on average, between two fatal accidents. If further influencing factors are assumed, this distance is extended by a multiple; several billion kilometers would be needed for such a test, which would still take a very long time. The problem is even larger: if improvements are made after a test, the test must be repeated in order to be on the safe side. The aim is to reduce the risk of accidents to a minimum or, ideally, to eliminate it as far as possible.
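A rough calculation illustrates the order of magnitude. The following minimal sketch assumes that fatal accidents follow a Poisson process, that the test fleet completes its mileage without a single fatal accident, and that a 95% confidence level is acceptable; the function name and the chosen confidence level are illustrative assumptions, not requirements taken from any standard.

    # Minimal sketch (illustrative assumptions, not a prescribed validation method):
    # estimate the failure-free test distance needed to show, with a given confidence,
    # that a system's fatal-accident rate is no worse than the human baseline.
    import math

    def required_test_km(baseline_km_per_fatality: float, confidence: float = 0.95) -> float:
        # With zero accidents over n kilometers, the one-sided upper confidence bound
        # on the accident rate is -ln(1 - confidence) / n; solving for n against the
        # baseline rate 1 / baseline_km_per_fatality gives the required distance.
        return -math.log(1.0 - confidence) * baseline_km_per_fatality

    baseline = 662e6  # km between two fatal accidents (motorway pilot example above)
    print(f"{required_test_km(baseline) / 1e9:.1f} billion failure-free test km")  # about 2.0

Even this simplified estimate yields roughly two billion failure-free kilometers before any further influencing factors are taken into account, which underlines why simulation and additional safety verification are indispensable.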

Accidents such as the following show what is at stake. Potential safety issues led to a recall of a so-called “Full Self-Driving System” (NHTSA, 2022), and the first of several fatal accidents occurred in Florida back in 2016. The driving system for longitudinal and lateral assistance from a US car manufacturer, called “autopilot”, was activated while the driver watched a Harry Potter video instead of paying attention to traffic. This crash showed the limitations of a Level 2 automation system (see Fig. 2.1) in combination with the driver's overreliance on a function that was improperly advertised as an “autopilot” (see Ch. 2).

The first fatal crash in fully automated mode with a safety driver on board occurred in 2018 in Tempe, Arizona, when a woman was killed while pushing her bike across the street (see Sec. 4.7.1.2).

We know that the acceptance of system performance varies. Nevertheless, the examples described above make evident a range of safety issues for the development and validation of automated systems based on environment sensors such as radar, lidar and video.

It is generally assumed that a vehicle able to cope with critical situations can probably also handle simple traffic situations. In particular, one aim is to maximize the proportion of simulation and laboratory bench tests in order to integrate comprehensive testing into the development process at a very early stage and to limit the effort on test tracks or in real-world traffic to a justifiable level.

A further question is: where are the limits of testing via simulation? This becomes challenging, for example, with complex sensor technology. It is hardly possible to simulate which signals the individual sensor types still perceive under particular weather or lighting conditions and whether they are able to recognize the surroundings adequately. The fatal Florida accident mentioned above is an example: supposedly the camera was blinded by the low sun and could not recognize the crossing truck.

1.2 Objective and Research Questions

Automotive technology must be designed to be “reasonably safe” and with a “duty of care”: if certain risks associated with the use of a product cannot be avoided, it must be assessed whether the dangerous product may be placed on the market at all, considering the risks, the probability of their occurrence and the benefits associated with the product. Vehicles have to be designed within the limits of what is technically possible and economically reasonable, according to the respective current state of science and technology, and must enter the market in a form suitable and sufficient to prevent damage (German Federal Court of Justice, Bundesgerichtshof, 2009).

A practice-oriented understanding of such acceptable risks, as a basis for decisions on a safe system design, is a prerequisite for the corresponding development process. With regard to these requirements, developing safe automated vehicles between innovation and consumer protection calls for a more detailed analysis guided by the following questions:

  • Which risks are known from accident research? (Chaps. 2, 3)

  • What will be technically acceptable? (designing complex technology safely, limits of sensor technology or Artificial Intelligence, system safety) (Chaps. 2, 3, 4)

  • Which benefits can be cited to justify introducing such systems? (Chaps. 2, 3, 4)

  • How can accident research be used for a safety (risk) assessment? (Chaps. 2, 4)

  • How safe is safe enough? (Chaps. 2, 4, 5)

  • How can safety in use be proven? (fuzzy logic of human factors) (Chaps. 3, 4)

  • How can reliability be proven? (customer satisfaction) (Chaps. 3, 4, 5)

  • What is legally acceptable? (Chap. 4)

  • Which conditions support the development team in developing a safe system? (Chaps. 4, 5)