
1 Methodology for Scenario-Based Analysis

The scenario-based approach introduced in Chap. 4 is carried out comprehensively for realistic and relevant situations of the demonstration use case “intersection” in the small town of Hallein in the region of Salzburg, Austria. The investigated area comprises several complex controlled intersections in sequence, together with some roundabouts, reaching from the highway exit through the city to some logistic root and end nodes. All these traffic nodes carry mixed traffic under complex topological conditions, with various conflict situations between individual vehicles, trucks, bicycles, pedestrians, powered two-wheelers and public transport. The given traffic conditions are complex enough to harbour several critical issues and circumstances, which may lead to severe traffic breakdowns in the area of interest.

1.1 Traffic Detection

First, one needs a clear and detailed understanding of the real traffic situations and scenarios. Three different approaches are used and combined here. In the first instance, a system of video cameras is installed for traffic detection in the selected area of interest. The cameras detect, classify and anonymise all traffic participants using deep learning methods and evaluate their trajectories precisely, as illustrated in Fig. 7.1. A detailed analysis of the traffic situation in Hallein, Austria, is presented in Chap. 10.

Fig. 7.1 Traffic observation and tracking of traffic participants with video and deep learning algorithms

In this way, the trajectories of all traffic participants are evaluated and merged consistently across all consecutive traffic nodes of the controlled intersections and roundabouts. These trajectories are later used for the detection, interpretation and prediction of traffic situations.

In addition to the video cameras, lidar sensors are also used for traffic detection. Such measurements can be seen in Fig. 7.2.

Fig. 7.2 Traffic detection with lidar sensors

Compared with video detection, the lidar sensors are more robust with respect to light and weather conditions. From an algorithmic point of view, object detection and classification are simpler than with video data. Anonymisation is not an issue here, because objects cannot be identified at a personal level. On the other hand, these sensors are much more expensive than simple video cameras. Hence, they are used only for the detailed analysis of selected street sections.

A further source for detecting and classifying traffic situations is floating car data, as provided by different vendors and sources. Figure 7.3 illustrates the application of machine learning-based clustering algorithms for the identification of the relevant traffic situations. Snapshots of floating car data are taken periodically over a relevant time range and area of interest and are then clustered and classified with unsupervised machine learning algorithms. The map shown is generated automatically and summarises all traffic situations, as represented by the floating car data, that actually occurred in the time and region of investigation. Beyond this comprehensive identification and representation of the actually occurring traffic situations, the situations are not only clustered and listed but also arranged by similarity: similar traffic situations appear as neighbours in the resulting catalogue of traffic situations.
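
As an illustration of this step, the following minimal sketch (Python with scikit-learn) clusters hypothetical floating-car-data snapshots, each assumed to be a vector of mean speeds per street section, into a small catalogue of traffic situations. The feature set, cluster count and algorithm are illustrative assumptions, not the project's actual pipeline; arranging similar situations as neighbours, as in Fig. 7.3, would additionally require a topology-preserving method such as a self-organising map.

```python
# Minimal sketch (not the project's actual pipeline): clustering periodic
# floating-car-data snapshots into a catalogue of traffic situations.
# Each snapshot is assumed to be a vector of mean speeds per street section.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_snapshots, n_sections = 500, 40                                # hypothetical dimensions
snapshots = rng.uniform(5, 50, size=(n_snapshots, n_sections))   # mean speeds [km/h]

X = StandardScaler().fit_transform(snapshots)
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)

# Each cluster centre represents one typical traffic situation; the label of a
# new snapshot indicates which catalogued situation it resembles most.
labels = kmeans.labels_
print("snapshots per situation:", np.bincount(labels))
```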

Fig. 7.3 Detection and classification of traffic situations by floating car data: similar situations are grouped in clusters in the catalogue of traffic situations (left)

1.2 Naturalistic Driving and Field Operational Tests

The traffic detection described in Sect. 7.1.1 is accompanied by naturalistic driving data, which delivers additional data for validating the trajectories from the vehicle’s internal view by recording signals from the vehicle bus and from further sensors that capture driver behaviour and the vehicle surroundings. While the data from Sect. 7.1.1 corresponds to the Eulerian specification in the sense of flow dynamics, the data evaluated here in Sect. 7.1.2 corresponds to the Lagrangian specification.

Lagrangian data is primarily used for the development of proper behavioural models of drivers and other traffic participants, which requires that external objects be identifiable by the vehicle sensors. Among the behavioural aspects that can be modelled and calibrated using the naturalistic driving data are reaction times and interactive behavioural patterns. For example, when vehicles are waiting at a traffic light, the first vehicle typically does not start moving immediately when the traffic light changes to green, but only after a certain reaction time, and the second vehicle does not start moving as soon as the vehicle in front initiates its movement. As a consequence, delay times accumulate, slightly reducing intersection capacity. By analysing the underlying naturalistic driving data, it is possible to model such reaction times stochastically and to use these models inside simulation models for improved accuracy and realism.
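
The following sketch illustrates one possible way to model start-up reaction times stochastically; the lognormal distribution and its parameters are illustrative assumptions, not values calibrated from the naturalistic driving data.

```python
# Illustrative sketch (assumed distribution, not calibrated project values):
# sampling start-up reaction times of queued vehicles when the light turns
# green and accumulating the resulting delay along the queue.
import numpy as np

rng = np.random.default_rng(42)

def queue_startup_delays(n_vehicles, median_s=1.2, sigma=0.4):
    """Draw one start-up reaction time per vehicle from a lognormal
    distribution (median ~1.2 s assumed) and return the cumulative delay
    that each vehicle in the queue experiences."""
    reaction_times = rng.lognormal(mean=np.log(median_s), sigma=sigma, size=n_vehicles)
    return np.cumsum(reaction_times)

delays = queue_startup_delays(n_vehicles=8)
print("delay of last vehicle in queue: %.1f s" % delays[-1])
```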

Another relevant aspect is the vehicle-following behaviour, which describes the reaction of a vehicle, in terms of speed variation in the longitudinal direction, to the movement of the vehicles ahead. It influences the distances between vehicles depending on speed and driver characteristics. Following behaviour has a noticeable impact on the capacity of the intersection, since it determines the maximum traffic density in the street that is still sufficiently safe.
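
As an example of a car-following formulation that can be calibrated from such data, the following sketch implements one step of the widely used Intelligent Driver Model (IDM); the parameter values are common illustrative defaults, not the calibrated values from this study.

```python
# Minimal sketch of a common car-following formulation (Intelligent Driver
# Model). Parameter values are illustrative defaults, not calibrated ones.
import math

def idm_acceleration(v, v_lead, gap,
                     v0=13.9,   # desired speed [m/s] (~50 km/h)
                     T=1.5,     # desired time headway [s]
                     a=1.0,     # maximum acceleration [m/s^2]
                     b=1.5,     # comfortable deceleration [m/s^2]
                     s0=2.0):   # minimum standstill gap [m]
    dv = v - v_lead
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Example: following a slower leader with a 20 m gap
print(idm_acceleration(v=12.0, v_lead=9.0, gap=20.0))
```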

Fig. 7.4 Vehicle tracks from naturalistic driving studies

A further aspect focuses on the street sections and the statistical variation of driving behaviours with respect to the spatial context. Figure 7.4 shows examples of three individual vehicle tracks from different drives in the investigated area. One can easily see the different speed profiles the vehicles were driving in the given section around the selected roundabout. The speed profiles naturally correspond to the topology of the street sections and the spatial surroundings, the traffic situation and the driver characteristics. In the investigated area, significantly more than 1000 individual trips with different drivers driving two specially equipped cars have been used for the calibration of stochastic driver models.

1.3 Traffic Modelling

The traffic situations as well as the proper models for driver behaviours are incorporated into agent-based micro-simulation models for the whole area under investigation. In Sect. 9.1.1, such a model of a specific intersection in Hallein is introduced. The traffic models are calibrated and validated intensively and carefully with the data from the traffic detection in Sect. 7.1.1 as well as the single vehicle tracks from the naturalistic driving studies from Sect. 7.1.2. Special care is taken in the consideration of interactive behaviours of the different traffic participants. The traffic light signal coordination plans are also included in the model, along with all relevant traffic signs and priority rules.

As part of the validation, the key performance indicators that result from the simulations, such as travel times, vehicle counts and others, are compared with the various sensor measurements. The traffic model is then steadily and iteratively refined until no significant discrepancy between simulation and reality remains. The performance indicators must be analysed in a stochastic way, since micro-simulation traffic models (like traffic in general) are not deterministic. It is furthermore necessary to vary the initial and boundary conditions of the simulation. As a result, a massive database of simulation results is made available, which offers a very detailed understanding of how different control strategies and actions affect the performance indicators under different traffic situations, including numerical conflicts between different key performance indicators. This is further described in Sect. 7.1.4.

One further advantage of the comprehensively validated micro-simulation traffic models is that the complete detailed information of each agent (cars as well as vulnerable road users) is available, including positions, speeds and many other features for each time stamp. This makes it possible to find out which data aggregation and filtering mechanisms work best for each control strategy. Additionally, it allows evaluating how reliable the aggregated floating car data is, depending on how many vehicles provide their information to the data aggregation service. Confidence intervals for the aggregated floating car data can also be calibrated by analysing micro-simulation results.
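
A minimal sketch of this idea is shown below: the speeds of all simulated agents on one link serve as ground truth (synthetic placeholder values here), and repeated subsampling estimates how the error of the aggregated floating car data shrinks as the share of reporting vehicles grows. The distributions and penetration rates are illustrative assumptions.

```python
# Sketch: estimating how reliable aggregated floating car data is as a
# function of the penetration rate, by subsampling full simulation output.
# The "ground truth" speeds below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
true_speeds = rng.normal(30, 8, size=400)       # speeds of all simulated agents on a link [km/h]

def fcd_error_bound(penetration, n_draws=2000):
    n_report = max(1, int(penetration * len(true_speeds)))
    errors = []
    for _ in range(n_draws):
        sample = rng.choice(true_speeds, size=n_report, replace=False)
        errors.append(sample.mean() - true_speeds.mean())
    return np.percentile(np.abs(errors), 95)     # 95% error bound

for p in (0.02, 0.05, 0.10, 0.25):
    print(f"penetration {p:.0%}: 95% error bound ±{fcd_error_bound(p):.2f} km/h")
```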

1.4 Development of Functions by Scenario Management

Having valid numerical models for arbitrary conditions is a core element of the scenario-based development and validation procedure. Once valid and trustworthy simulation models are available, not only the given control strategies but also various alternative vehicle and traffic control strategies and actions are carried out in the simulations.

Specifically, numerous different variants of the scenarios are produced with the help of Monte Carlo simulation approaches. Different aspects are subject to such variations. An obvious example is the traffic situation, which can be described by the inflows to the system, the routing decisions or origin–destination matrix and the vehicle composition. Pedestrian flows are of course an important component of the traffic situation. Environmental conditions can be varied, too. The interactive behaviours of traffic participants are also subject to variation, including reaction times, longitudinal behaviour and car-following models, lane changing behaviour, overtaking and many others. Dynamic components of the infrastructure are varied as well, including traffic light signal plans and V2I/I2V messages. Even static components of the infrastructure can be subject to variation, including aspects such as the number of lanes, new road markings and others.
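
A minimal sketch of such a Monte Carlo variation is given below; the parameters and their ranges are purely hypothetical and only indicate how one scenario variant could be drawn before being passed to the micro-simulation.

```python
# Sketch of Monte Carlo scenario generation: each draw produces one scenario
# variant to be fed to the micro-simulation. All ranges are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

def draw_scenario():
    return {
        "inflow_veh_per_h":   rng.uniform(300, 1200),     # demand at a network entry
        "truck_share":        rng.uniform(0.02, 0.15),    # vehicle composition
        "pedestrians_per_h":  rng.uniform(50, 400),       # pedestrian flow
        "mean_reaction_time": rng.uniform(0.8, 1.6),      # driver behaviour [s]
        "green_time_main_s":  int(rng.integers(20, 45)),  # signal plan component
    }

scenarios = [draw_scenario() for _ in range(1000)]
print(scenarios[0])
```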

These massive variations are executed and simulated on top of existing reference control algorithms for vehicle and traffic control. Traffic regulations and laws are considered static; in principle, these could be varied in a similar way, too. The reference controls are traditional static traffic control coordination plans in combination with manually driven vehicles, whose driving behaviours are taken from Sect. 7.1.2.

A comprehensive set of relevant fundamental manoeuvres is identified, thoroughly studied and automated via trajectory optimisation and platoon control techniques. One of these is the individual extension of green times for a platoon, allowing it to pass the intersection completely instead of being split by unfortunate green-time phasing. Coordinated drive-away at the start of the green phase has also been investigated and assessed as a control strategy able to minimise the accumulated reaction time discussed in Sect. 7.1.2. A further strategy that has been evaluated is the temporal and spatial increase of platooning density for an optimised passage during a given fixed green phase; such a strategy allows platoons to pass the intersection without splitting and improves the capacity of the intersection. Other control strategies that have been evaluated are the energy-optimised approach to an intersection during a red light phase as well as the prioritisation of certain traffic participants (such as public transport or emergency vehicles).
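
The following sketch illustrates the core of the green-time-extension decision under simplifying assumptions (constant platoon speed, a single maximum extension); the actual control logic used in the project may differ.

```python
# Sketch of the green-time-extension decision for an approaching platoon:
# estimate when the last platoon vehicle clears the stop line and request
# an extension if that exceeds the remaining green. Values are illustrative.
def required_green_extension(dist_to_stopline_m, platoon_length_m, speed_mps,
                             remaining_green_s, max_extension_s=10.0):
    time_to_clear = (dist_to_stopline_m + platoon_length_m) / speed_mps
    if time_to_clear <= remaining_green_s:
        return 0.0                          # platoon passes without splitting
    extension = time_to_clear - remaining_green_s
    return extension if extension <= max_extension_s else None  # None: cannot be served

print(required_green_extension(120.0, 60.0, 12.0, remaining_green_s=12.0))
```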

These control actions also required implementing the relevant aspects of V2X communication in the simulation models. In the case of platooning, such control strategies may include, for example, the selection of situation-aware distance gaps. Other strategies include using I2V information from the traffic lights about the remaining time until the next green or red phase to optimise the speeds and spacing of the platoon members, or, conversely, using V2I information from the platoon to adapt the signal plan in order to give priority to the platoon, if so desired.
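
A simple illustration of such an I2V-based speed advisory is sketched below; it assumes a constant advised speed and ignores acceleration dynamics and preceding traffic, so it should be read as a conceptual example rather than the implemented strategy.

```python
# Sketch of an I2V-based speed advisory: choose a speed so that the platoon
# leader reaches the stop line while (or just as) the light is green.
def advised_speed(dist_m, time_to_green_s, time_to_red_s,
                  v_min=5.0, v_max=13.9, current_speed=13.9):
    # If the stop line can be reached before the current green ends, keep speed.
    if time_to_green_s <= 0 and dist_m / current_speed <= time_to_red_s:
        return current_speed
    # Otherwise aim to arrive exactly at the start of the next green phase.
    if time_to_green_s > 0:
        v = dist_m / time_to_green_s
        return min(max(v, v_min), v_max)
    return v_min   # cannot make the current green: approach slowly

print(advised_speed(dist_m=200.0, time_to_green_s=25.0, time_to_red_s=0.0))
```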

Safely managing cut-in, cut-through and unexpected braking manoeuvres has been implemented and tested in vehicle dynamics simulations, too. The aspects of safe automated driving have been addressed with state-of-the-art and novel scientific results, including safety-extended predictive platoon control and measures to obtain string stability, an essential dynamic property of efficient and safe automated driving systems.
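
The notion of string stability can be illustrated with the following simplified simulation of a constant-time-gap platoon controller: if the maximum spacing error grows from the first follower towards the tail of the platoon, the chosen gains are not string stable. Controller structure, gains and the leader perturbation are illustrative assumptions, not the safety-extended predictive controller mentioned above.

```python
# Sketch: checking amplification of a leader speed perturbation along a
# platoon under a simple constant-time-gap controller (Euler integration).
import numpy as np

dt, T_gap, kp, kv = 0.1, 1.0, 0.5, 1.0           # time step, time gap, gains (assumed)
n_veh, n_steps = 6, 1200
v = np.full(n_veh, 15.0)                          # speeds [m/s]
x = -np.arange(n_veh) * (15.0 * T_gap + 5.0)      # initial positions at desired spacing
err_max = np.zeros(n_veh)

for k in range(n_steps):
    t = k * dt
    v_lead_cmd = 15.0 - (3.0 if 5.0 < t < 10.0 else 0.0)   # temporary leader slow-down
    a = np.zeros(n_veh)
    a[0] = 1.5 * (v_lead_cmd - v[0])
    for i in range(1, n_veh):
        spacing = x[i - 1] - x[i]
        err = spacing - (5.0 + T_gap * v[i])                # spacing error
        a[i] = kp * err + kv * (v[i - 1] - v[i])
        err_max[i] = max(err_max[i], abs(err))
    v += a * dt
    x += v * dt

# Growing values towards the end of this list indicate string instability.
print("max spacing error per follower:", np.round(err_max[1:], 2))
```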

1.5 Evaluation and Analysis of Key Performance Indicators (KPIs)

For the simulations of the various control strategies and actions, the resulting key performance indicators (KPIs) are specified, evaluated and assessed. These include not only traditional indicators like travel time losses (as shown in Fig. 7.5), flow capacities, etc., but also more sophisticated parameters like capacity reserves, collision risks, congestion risks, complexity of the traffic situation, robustness, and the duration of the traffic disturbance caused by an event such as a platoon being given some additional green time to cross the intersection.

Fig. 7.5 Exemplary evaluation and comparison of travel and loss times for certain routes with the reference solution and an alternative control strategy in the investigated area of interest

Some of these KPIs can be derived and measured directly from single simulations or measurements. Others, like collision risk and danger, congestion risk, flow capacity reserves, criticality rates and some more, cannot be measured directly, neither in simulation nor in the real world. For such KPIs, virtual sensors are used, which have been developed and applied with special multi-layered stochastic simulation methods. Their description by far exceeds the scope of this exposition and will therefore be disclosed in subsequent publications.

Robustness is another important property for the rating of control algorithms, and it needs to be inherent in the design of the system and the control algorithms from the outset. For example, the traffic loss time of the nominal reference solution scatters significantly due to natural variations. This can be seen in the box plots of Fig. 7.5, which compare the travel times of the reference solution on the left with those of an alternative, improved control solution on the right. Besides the reduction of the travel times, their scatter could also be reduced significantly, expressing an increased robustness of the system performance.

Complexity is another relevant property of traffic situations. The novel concept of traffic complexity rating was first introduced in [1]. It allows decisions such as disallowing specific control strategies when the resulting traffic situation becomes too complex. In general, complexity correlates with a loss of resilience of the traffic control system and can be used as a design criterion to avoid non-robust control strategies. A high traffic complexity rating is a good indicator of congestion risk.

The corresponding information design helps in a concerted selection of the best and situation-aware control strategies and actions. There are typically conflicts between different key performance indicators. Therefore, it is necessary to focus on all the relevant and available indicators at once. A typical example is that a longer green phase for one direction improves the traffic flow in that direction but makes it worse in the perpendicular direction at a signalised crossing.

Fig. 7.6 Comparison of resulting network traffic performance as a result of applying alternative control strategies (right)

Figure 7.6 graphically summarises the effect of a given control strategy on the whole system, coding the resulting aggregated speeds for each street section with a colour code, as known from various floating car data representations.

It has to be noted that multiple complementary simulation environments with varying granularity, simulation scope and depth, ranging from traffic flow micro-simulations to detailed vehicle control and vehicle dynamics simulations, are utilised to cover the multiple aspects and scopes of the posed research questions, that is, the different performance indicators. For example, a research question on the potential benefits and risks of utilising platoon control to realise coordinated passing of a signalised intersection needs to be answered broadly:

  • Traffic simulations allow statements on the traffic flow effects in the surrounding traffic network links.

  • Detailed platoon vehicle control/vehicle dynamics simulations are needed to assess the response characteristics of such manoeuvres in the presence of mixed/individual traffic and to verify the behaviour of situation-aware platooning control.

  • Measurement data and derived behavioural models must be analysed to obtain valid model parameters and assumptions for any simulation study.

Finally, dedicated tests on real roads, from testing grounds to open-road tests, are needed to back up, extend or challenge the findings, to allow iteration loops that refine research questions, tools and methods as well as data support, and to close the research feedback loop.

1.6 Adaptation and Learning

One of the reasons for the complexity and the difficulties of traffic control and management is the fact that human traffic participants adapt to different traffic situations and traffic control strategies dynamically and sometimes unexpectedly. In such cases, the capability of quick adaptation to new traffic situations is the only available resolution. Therefore, after application and implementation of the improved traffic and vehicle controls, their effect is not only evaluated in simulations under given, fixed assumptions. The control strategies and their implementations must be assessed in the real environment as soon as possible, following Sect. 7.1.1. Changes in traffic participants’ behaviours, or changes in the underlying assumptions caused by new control strategies, are thereby captured quickly and re-integrated into the scenario catalogue, enabling a controlled, iterative enhancement and improvement for the steady adaptation to new situations and the realisation of cooperative learning. In this way, the cycle of continuous development and (self-)adaptive learning of complex system-of-systems environments is closed.

Of course, this implies that the real environment must be monitored continuously in order to systematically identify, describe and track such changes in traffic situations and in the behaviours of the traffic participants. One of the methodological fundamentals for such a continuous iterative process is anomaly and incident detection. Detecting anomalous, not previously observed situations allows the developers to identify, model and simulate them in order to evaluate the effects of the different strategies and actions in these newly observed situations. In this way, anomaly detection ensures that all relevant situations are identified, collected and catalogued in the scenario catalogue, and that changes in the underlying real-world situations are incorporated systematically. Anomaly detection for new traffic situations can be carried out using simple statistical techniques, but also with complex and powerful machine learning methods that can recognise anomalous traffic patterns.
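
As a simple illustration, the sketch below flags anomalous traffic-situation snapshots with an isolation forest; the features and data are synthetic placeholders, and in practice both simpler statistical tests and more elaborate learning methods can serve the same purpose.

```python
# Sketch of anomaly detection on traffic-situation feature vectors (e.g. the
# floating-car-data snapshots from Sect. 7.1.1). An isolation forest flags
# snapshots that do not resemble anything in the existing catalogue.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
catalogue_snapshots = rng.normal(30, 5, size=(1000, 40))   # known situations (synthetic)
new_snapshots = rng.normal(30, 5, size=(50, 40))
new_snapshots[:3] = rng.normal(5, 2, size=(3, 40))          # e.g. a network-wide breakdown

detector = IsolationForest(contamination=0.01, random_state=0).fit(catalogue_snapshots)
is_anomaly = detector.predict(new_snapshots) == -1          # -1 marks anomalies
print("anomalous snapshots:", np.where(is_anomaly)[0])
```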

2 Integral Safety and Advanced Driver Assistance Systems (ISS/ADAS)

Recalling Fig. 4.1 in Chap. 4, safety constitutes a major benefit of automated driving in general and of platooning in particular. A portion of the research effort therefore is concentrated on automotive safety.

Our approach is founded on the following principles:

  • Assuming a probabilistic/stochastic point of view.

  • Consistent top-down instead of bottom-up system requirements development.

  • Analysis of field effectiveness instead of test effectiveness.

  • Increasing integration of simulation.

2.1 Use Case-Based Representation of Requirements

Adopting use case-based specifications can greatly simplify the task of defining required system behaviour by allowing the system engineer to focus on one use case at a time. The system engineer can then assign the desired system behaviour to each use case. Of course the number of use cases required to sufficiently specify the system depends on the problem to be solved. Covering the complete field of operation of ISS/ADAS may easily require several thousand use cases (note that we use this term in a broad sense that includes “misuse cases” as well). In the scope of the project Connecting Austria, for example, platoon safety is of particular interest. The relevant use case is collision avoidance while platooning. Details regarding the application of this method are demonstrated in Chap. 9.

Stochastic simulation makes it possible to generate the required diversity of use cases in an economically feasible way. However, this raises the need for new methods to cope with large numbers of requirements, including the following:

  • Heuristics or physically motivated rules must be derived that allow an automatic assignment of the desired system behaviour to each use case.

  • Numerical conflict analysis techniques can identify requirement conflicts (or prove their absence) in a large set of use cases.

  • Metrics are needed to monitor the coverage of the field of operation.

To address these issues, a multi-layered Monte Carlo simulation technique named incremental probabilistic simulation has been devised and implemented by the authors.

2.2 System and Component Rating

The structure of requirements on ISS/ADAS can be expressed in terms of the UML model given in Fig. 7.7. Requirements on the system (top level) are broken down into requirements on single components (lower levels) reflecting their interdependencies.

Fig. 7.7 UML use case model for integral safety with one function specification highlighted as an example

The authors are convinced that requirements on ISS/ADAS should be developed by following an iterative, cyclic top-down path through the use case diagram; a similar opinion is expressed in [5]. Since pure top-down development may easily lead to unsatisfiable requirements on the lower levels, the development should be accompanied by feasibility studies. Such feasibility studies may confirm assumptions and thus back the top-down approach to system requirements development.

The use case model lends itself to pointing out the distinction between system and component ratings. In terms of this model, a system rating (Fig. 7.7) measures the degree of fulfilment of system-level requirements (levels safety system and/or strategy). An example result of a system rating is “20% reduction of collisions”.

In contrast to system ratings, a component rating (Fig. 7.7) reflects the degree of requirements fulfilment of the layer directly above. An example result of a sensor component rating might be “80% of objects are detected before the specified time to fire”. While component ratings have some value, system ratings have far more significance for the ISS/ADAS system developer.

This line of thought motivates a very powerful technique, namely system rating with ideal component models; see Fig. 7.8. In early stages of development, due to the top-down paradigm, little is known about the properties of lower-level components. Assuming the underlying components to be ideal (i.e. without physical or conceptual limitations, for example a sensor that always sees everything or an algorithm that always decides correctly) allows establishing theoretical performance limits. In this way, the implications of design decisions can be explored.
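
The following sketch illustrates the idea with a deliberately crude example: the same braking action is rated over a set of synthetic use cases, once with an ideal sensor of unlimited range and once with a range-limited sensor. All numbers and the kinematic model are hypothetical.

```python
# Sketch of "system rating with ideal component models": rate the same
# avoidance logic over a set of use cases, once with an ideal sensor (sees
# everything immediately) and once with a range-limited sensor.
import numpy as np

rng = np.random.default_rng(5)
n_cases = 10_000
closing_speed = rng.uniform(5, 25, size=n_cases)      # [m/s], synthetic use cases
detection_dist_ideal = np.full(n_cases, np.inf)       # ideal sensor: unlimited range
detection_dist_real = rng.uniform(20, 60, size=n_cases)

def share_avoided(detection_dist, decel=8.0):
    braking_dist = closing_speed**2 / (2.0 * decel)    # distance needed to cancel closing speed
    return np.mean(braking_dist <= detection_dist)

print("theoretical limit (ideal sensor):", share_avoided(detection_dist_ideal))
print("range-limited sensor            :", share_avoided(detection_dist_real))
```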

Fig. 7.8 Evolution of component models during system development

As an example, consider the development of a collision avoidance system. Assume this system should focus on front crashes. Following the top-down paradigm, one would begin the development by generating use cases that represent relevant accidents as well as non-accidents, e.g. using vehicle dynamics simulation software.

The next step is to define the system’s behaviour in every use case. A first idea might be that the system should brake autonomously in order to avoid the collision. Assume a preliminary safety integrity level (SIL) rating suggests limiting the velocity reduction to, e.g., 30 km/h. When the collision cannot be avoided within this limit, the ego velocity should be reduced by 30 km/h at the point of collision. Next, a precise definition of “front crash”, including a quantitative evaluation criterion, is required in order to determine whether a use case requires triggering the brake.
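
A minimal sketch of how such a behaviour assignment could look for a single use case is given below; the kinematics are heavily simplified (constant deceleration, no reaction time) and the trigger criterion is only an illustrative placeholder for the precise “front crash” definition mentioned above.

```python
# Sketch of assigning the desired behaviour to one use case: brake if the
# collision can be avoided with at most a 30 km/h velocity reduction,
# otherwise demand a 30 km/h reduction at the point of collision.
DV_MAX = 30.0 / 3.6            # allowed velocity reduction [m/s]

def desired_behaviour(ego_speed, obstacle_speed, gap, decel=8.0):
    closing_speed = ego_speed - obstacle_speed
    if closing_speed <= 0.0:
        return "no action"                         # no front-crash conflict
    stopping_gap = closing_speed**2 / (2.0 * decel)  # distance to cancel the closing speed
    if closing_speed <= DV_MAX and stopping_gap <= gap:
        return "brake to avoid collision"
    return "brake to reduce ego velocity by 30 km/h at collision"

print(desired_behaviour(ego_speed=13.9, obstacle_speed=8.0, gap=25.0))
```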

Having defined actions and domain of operation in this way, the theoretical limit of system effectiveness can be established.

Since limitations of algorithm, sensor and electronics have not yet been considered, a real system cannot be expected to actually meet this performance.

If the theoretical limit of system effectiveness is promising, the next step is to study the field of operation. This includes defining criteria that describe exactly which subset of use cases should be addressed by the ISS/ADAS. One might want to disable the autonomous brake when the driver brakes, accelerates or steers, in order to avoid interfering with the driver’s intent (in compliance with the Vienna Convention on Road Traffic, see [3]). The effects of such system restrictions can be studied by including them in the system model and re-evaluating the system effectiveness. Continuing the development, studies may include different sensor models or draft algorithms to further elaborate the system model. Over the different project phases, the effectiveness ratings become more accurate as the system model converges towards the real system. Following this approach allows directing efforts to the most promising concepts. Since concepts and implementations of system components can be rated in a comparable way, it is also possible to identify the components with the largest impact on system performance and to attribute performance losses to concept or implementation. Altogether, this approach massively front-loads development efforts and allows for a quantitative assessment of engineering decisions, both being highly valuable properties of a development methodology.

2.3 Data Mapping, Representativeness of Use Cases

It is evident that the test cases required for releasing ISS/ADAS depend on the specific system. An autonomous braking system has to be tested against different use cases than, for example, adaptive cruise control (ACC) or pedestrian warning systems. In early phases of top-down development, when the system specification is still incomplete and has many degrees of freedom, a large variety of (potential) use cases is required and can be provided by simulation. When the system design matures, the use cases relevant for the specific system become more apparent. In early phases, development is primarily guided by simulation. As soon as the first physical system prototype becomes available, simulation may be supplemented by real tests.

While the cost per use case continually increases during development, the set of use cases that require explicit testing is expected to shrink due to better understanding of case relevancy. This raises the following issues:

  (a) The system-specific relevant cases must be identified.

  (b) Results of a small number of real tests have to validate results derived from a large number of simulations.

  (c) A large number of simulations and a small number of real test cases have to be combined to form a line of argument that justifies a system release.

Resolving these issues requires intensive interdisciplinary collaboration. Issue (a) is already addressed in, for example, engine calibration by methods like “Online Design of Experiments” or “model-based calibration”; see [4]. Issue (b) can be tackled by approaches from areas such as statistical material modelling, like “Operation Monitoring” [2]. Issue (c) requires the joint effort of consumer and standards organisations, product liability departments and engineers.