1 Cooperation Between Human, Co-system and Environment

Cooperation in automated driving is a bridging paradigm connecting many facets, e.g., cooperation between machines and machines, between humans and humans as well as between humans and machines. Cooperation does not necessarily need similarity among cooperation partners. Compatibility, however, is a crucial requirement. It needs to be sufficiently developed between the outer borders of the cooperating sub-systems (outer compatibility) and between the inner, often cognitive, aspects of the cooperating sub-systems (inner compatibility) [9], leading to outer and inner cooperation. In these complex systems, not only the humans and machines in the directly acting human-machine system should cooperate, but also the people and machines in the meta-system, e.g., in research and development.

The cooperation between multiple vehicles reflects the outer cooperation from the viewpoint of a single automated vehicle and is examined in varying detail in many other chapters of this book. The following chapter focuses on the cooperation of a single human with a single automation within a highly automated vehicle. Any cooperation with other vehicles, between those vehicles, and with the ego vehicle itself is considered part of the environment.

In general, there are three main entities within the system of the ego automated vehicle: The human, the co-system (including the automation and other technical subsystems), both of which are considered agents within the system, and the environment. As shown in Fig. 1, the human and co-system influence the environment through joint actions. To enable a joint action [25], the human and the co-system have to cooperate either through direct communication or through a mediator, which is represented by the center element of the diagram.

In this system model, it is assumed that both the human and the co-system may share the vehicle control and transition control between one another. The direct communication between the two agents is crucial for the co-system to communicate decisions made by a network of cooperating vehicles as well as possible actions needed by the human if the co-system reaches its limitations.

Fig. 1
A circular path diagram. A human and a co-system send intentions to a central mediator, which takes action on vehicles. The human and the co-system get perceptions from the vehicles. The human has interaction and arbitration with the co-system.

(Based on Flemisch et al. [8] and Löper [21])

Simple model for the cooperation between human and co-system. Human and co-system both act on and control the vehicle as a part of the environment.

In order to successfully design human-machine cooperation, it is necessary to align the “mental model” of the co-system with the mental model of the human [9] to include the environment, and to keep it transparent and repeatable. One tool to achieve this is a design metaphor, successfully applied, e.g., in the form of the desktop metaphor (as established by Alan Kay at Xerox PARC in the 1970s) or the H(orse)-metaphor [8], which transfers the mental model of rider and horse to the domain of highly automated vehicles. A more generalized approach is the pattern approach based on Alexander et al. [1], applied to music by Borchers [5], to software by Gamma et al. [16], and to human-machine systems by Baltzer [2], Herzberger et al. [20], López Hernández [22], and others. For more details on patterns see Flemisch et al. [15] and the chapter of Flemisch et al. [7].

2 The Concept of Confidence Horizons

The idea behind the confidence horizon concept is to bring together the predictions of from when and until when the human and the automation are able to control the joint system, in this case an automated vehicle.

In this sense, the confidence horizon is coupled to the prediction of the ability to execute control over the joint system. Combining the predictions for human and automation makes clear when a safe transition of control between human and automation can be expected and how automation and human need to communicate, depending on the severity of the situation. Figure 2 depicts the confidence horizon concept.
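The core of the concept can be sketched in a few lines of code. The following is an illustrative simplification under our own assumptions, not the implementation used in the study: each agent's ability is reduced to a single predicted time point, and the difference between the automation's last controllable moment and the human's first controllable moment yields either a safety buffer or a safety gap.

```python
# Hypothetical sketch of the confidence-horizon idea. The interval model and
# all names are illustrative assumptions, not the authors' implementation.

def transition_window(automation_able_until, human_able_from):
    """Buffer (positive) or gap (negative) between the two horizons, in seconds."""
    return automation_able_until - human_able_from

def classify(automation_able_until, human_able_from):
    """Classify a predicted transition as safety buffer or safety gap."""
    window = transition_window(automation_able_until, human_able_from)
    if window > 0:
        return ("safety buffer", window)   # transition can happen in this window
    return ("safety gap", -window)         # neither agent can control for this long
```

For example, if the automation can control the vehicle for another 12 s and the human is predicted to be ready after 8 s, a 4-second safety buffer remains for the transition.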

Fig. 2
2 diagrams and a screenshot. Left. The manual zone, confidence horizon, safety buffer, and partially and fully automated zones between a human and computer. Center. A road with an automation zone, safety buffer, and human control from ego. Right. A driving simulator with the confidence zone borders.

The confidence horizon as a product of the distribution of the ability to control for human and automation. Control distribution (left, based on Herzberger et al. [20]) and projection onto a use case of highly automated driving (center). Application of the confidence horizon start (human) and end (automation) in the driving simulator (right)

As shown on the left, human and automation are more or less involved in the current driving task, depending on the current automation mode (e.g., manual, partially, or highly automated) and the resulting distribution of control [10]. As stated in the SAE [24], starting from SAE Level 4 automation, the driver is explicitly allowed to disengage completely from the driving task, which results in a potential loss of situation awareness for the driving task, especially when engaging in a non-driving related task (NDRT) [28]. Even at lower automation levels (automation according to SAE Level 2), despite the driver’s obligation to be ready to intervene and ongoing liability for the vehicle’s actions, the driver may tend to lose awareness, a mechanism described as the unsafe valley of automation [11]. With the confidence horizon concept, we propose to make this unsafe valley visible at least to the automation and its developers, and optionally also to the driver, so that she can act accordingly. The control distribution in Fig. 2 (left) shows, on the one hand, who has to control the vehicle in a given automation mode and, on the other hand, the ability of the human (in orange) and the automation (in blue) to actually execute the vehicle control. Projecting the ability distribution for the human and the automation onto a real situation (see Fig. 2, center) directly shows the need for a control transition due to a lack of ability of the automation to handle an obstacle in this situation. Furthermore, it shows the available time frame in which the transition to the human has to take place (shown as safety buffer).

In a critical situation (system boundary or system failure), the confidence horizons clearly show a safety gap, i.e., a time frame in which neither automation nor human is able to control the driving-related task. The confidence horizon concept enables the automation to detect such cases as early as possible and act accordingly. Depending on the time remaining until the system failure is reached and the current ability of the driver, the automation triggers either a diagnostic take-over request (TOR), if there is a safety buffer present before the system would fail, or a minimum risk maneuver (MRM).
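This escalation logic can be expressed as a small decision rule. The following is our reading of the text, hedged as a sketch: the three-way split and the notion of a separate readiness probe are illustrative assumptions.

```python
# Illustrative decision rule derived from the description above; the exact
# escalation logic of the automation is an assumption for this sketch.

def select_transition(safety_buffer, driver_able):
    """Choose the automation's next step once a system limit is predicted ahead."""
    if safety_buffer <= 0.0:
        return "MRM"             # no time frame left for a safe hand-over
    if driver_able:
        return "TOR"             # driver predicted able: regular take-over request
    return "diagnostic TOR"      # buffer exists, but driver readiness must be probed
```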

We propose to use the confidence horizon concept for the design of highly automated human-machine systems, to identify the proper transition strategy in case of an upcoming control gap, and to predict the future ability of the human and the automation to control the joint system. However, based on our exploration results, we recommend using the confidence horizon as a basis for designing HMIs for situations of varying criticality, including the communication strategy of the automation, rather than displaying it directly as a simple visual representation as in Fig. 2 (right).

3 Application of the Pattern Approach to Cooperative Automated Driving

To achieve good cooperation between two agents, both need to understand each other. When designing human-machine cooperation, the challenge is to find a common language. A promising solution for finding common ground at scale is the approach of interaction patterns. Based on Alexander et al. [1], Flemisch et al. [14] describe a pattern as follows:

A pattern describes something that occurs over and over again. An example for this is a problem and/or its solutions. If this can be observed, and its core can be mapped and modelled, you can either observe and match the pattern over and over again, without ever making the identical observation twice. And/or you can instantiate and design with this pattern over and over again, not necessarily doing it the same way twice. Examples for this are designing, engineering and using of artefacts like human-machine systems. Flemisch et al. [14]

Alexander et al. [1], Borchers [5] and Baltzer [3] use patterns to describe a solution to a given problem and propose a pattern language for the design of patterns. Flemisch [6] and López Hernández [22] set another focus on the structure of the solution by describing in detail the sequence of interaction within a pattern. The authors’ proposal also applies this focus, further tailored to matching a given pattern instance in the case of cooperatively interacting vehicles.

When using the pattern approach for active cooperation, the pattern structure is extended by a set of properties to detect which cooperation partner should, wants to, and is currently performing a given pattern, resulting in a new subset of patterns: cooperation patterns. In the proposed setup, all properties are predicted by the co-system. Each property can be described by a sub-pattern, so that if the sub-pattern matches, the activation value and confidence of the respective property increase as well.

The fundamental properties of a cooperation pattern are utility, ability, intention and execution. Utility describes how useful the activation of the current pattern would be for the respective agent. Ability represents the agent’s ability to execute the pattern now and in the near future. Intention describes the agent’s inner determination to execute the pattern, while the execution property describes the match between the agent’s actual current actions and the actions required to execute the pattern at hand.
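The four properties can be captured in a simple data structure. Representing each property as a per-agent value in [0, 1] is our assumption for illustration; the chapter does not fix a concrete data type, and the example values below are hypothetical.

```python
from dataclasses import dataclass

# Sketch of a cooperation pattern's four fundamental properties.
# Per-agent values in [0, 1] are an assumed encoding, not the authors' format.

@dataclass
class CooperationPattern:
    name: str
    utility: dict    # agent -> how useful activating the pattern would be
    ability: dict    # agent -> ability to execute the pattern now / near future
    intention: dict  # agent -> inner determination to execute the pattern
    execution: dict  # agent -> match of current actions with the pattern

# Hypothetical snapshot during a take-over request: the automation intends
# and partly executes the pattern, the human is not yet acting on it.
tor = CooperationPattern(
    name="take-over request",
    utility={"human": 0.9, "automation": 0.9},
    ability={"human": 0.4, "automation": 0.8},
    intention={"human": 0.2, "automation": 1.0},
    execution={"human": 0.0, "automation": 0.7},
)
```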

Derived from the cooperation pattern, the relevant patterns are activity patterns and transition patterns. Applied to cooperative vehicles, there are driving related and non-driving related activities (see Fig. 3).

Fig. 3
A block diagram. Transition pattern T O R leads to 2 activity patterns for driving-related activity with an automation symbol, and non-driving-related activity with a human symbol.

Simple illustration of the change of focus on an activity using transitions. The symbol in the right corner of the activity shows which agent is currently most active in the respective activity

Both agents, the human as well as the automation, can focus on one of these activities. They can change their own focus and try to change the other’s focus by starting a transition pattern, e.g., a takeover request (TOR).

Figure 4 depicts the pattern network for the application in transition control for highly automated driving. It displays the same process as Fig. 3, with the patterns as states and for each agent individually. On the most basic level, the activity of human and automation can be considered as driving related or non-driving related. Since activity patterns are derived from cooperation patterns, they contain the properties for the utility, ability, intention and execution of the activity by both agents according to the co-system’s prediction. The same applies to transition patterns.

Fig. 4
2 transition diagrams. Left. Driving-related activity is followed by transition pattern handover, non-driving-related activity, and transition patterns T O R and M R M. Right. Non-driving-related activity is followed by handover, driving-related activity, and transition patterns T O R and M R M.

Possible transitions between activity and transition patterns for human (left) and automation (right) in case of a take-over initiated by the co-system

The detection of the ability of both human and automation to execute the driving-related activity directly reflects the current state of the confidence horizon. Transitions are used to switch from one activity to the other. Various transitions are available depending on the initiator of the transition, the current size of the safety buffer in the confidence horizon, and the predicted ability of human and automation to execute the target activity. It should be noted that, in the case of a transition, both human and automation have to change their activity. As part of the co-system, a mediator arbitrates conflicts between human and machine [4] and provides transparency of the automation’s behavior to maximize the overall utility of the human-machine system. This mediator makes all joint decisions. It is the mediator’s responsibility to let the co-system initiate a certain transition or to prevent the human from using a transition that is not feasible for the system. Figure 4 illustrates the possible transitions between activity and transition patterns for both human and automation, assuming that each agent is focused on a single task at any given time. In this application, the automation can initiate a take-over request (TOR) that, if successful, leads to a change in activity for both agents, or otherwise escalates into a minimum risk maneuver (MRM).
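The transition logic above can be sketched as a minimal state machine, under the stated assumption that each agent focuses on a single activity at a time. State and transition names loosely follow Fig. 4; the mediator's feasibility check is a hypothetical simplification.

```python
# Minimal state-machine sketch of activity transitions; names and the
# feasibility flag are illustrative assumptions, not the authors' code.

TRANSITIONS = {
    ("NDRT", "TOR"): "driving",       # successful take-over request
    ("driving", "handover"): "NDRT",  # driver hands control back to automation
}

def mediator_step(activity, requested, feasible):
    """Apply a transition only if the mediator deems it feasible for the system."""
    if not feasible:
        return activity               # mediator blocks infeasible transitions
    return TRANSITIONS.get((activity, requested), activity)
```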

A combined representation of both diagrams of Fig. 4 is shown in Fig. 3, highlighting that all activities are considered states with the properties of utility, ability, intention and execution for each agent. Additionally, an agent is not permanently limited to a single activity, but rather uses transition patterns to change focus from one activity to another.

Applied to human-automation cooperation in cooperatively interacting vehicles, this could be implemented as follows (Fig. 5): The co-system detects a safety gap ahead and needs to transition the human from a non-driving related activity to the driving-related task. This has to be done before the safety gap comes too close; otherwise, the co-system has to initiate a minimum risk maneuver, which might involve a higher risk than a successful take-over by the human. Figure 5 depicts this situation at time \(t_{1.1}\). If there is enough time to hand over control to the human, the co-system starts a two-stage take-over pattern (based on, e.g., Rhede et al. [23], Winkler et al. [27] or Guo et al. [17]) to let the driver gain situational awareness and take back control safely. Depending on the predicted ability of the driver, the first warning might be sufficient, or the second warning stage has to be triggered, starting at \(t_{1.2}\). If the transition fails because the human is either unwilling or unable to take over in time, the co-system, following Herzberger et al. [19], aborts the take-over transition and starts another transition to the MRM, leading to \(t_{2.1}\). Only if the transition is successful is control transferred to the human, and the automation accordingly releases control over the driving-related activity (\(t_{2.2}\)).
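The sequence \(t_{1.1}\) to \(t_{2.2}\) can be sketched as a small event generator. This is a hedged illustration of our reading of the process; the event labels and the two boolean inputs are assumptions that stand in for the co-system's readiness predictions.

```python
# Sketch of the two-stage take-over sequence described above (t1.1 .. t2.2).
# Inputs and event labels are illustrative assumptions.

def two_stage_takeover(ready_after_first_warning, takes_over_in_time):
    """Return the sequence of events for one take-over attempt."""
    events = ["t1.1: first warning issued"]
    if not ready_after_first_warning:
        events.append("t1.2: second warning stage triggered")
    if takes_over_in_time:
        events.append("t2.2: control transferred to human")
    else:
        events.append("t2.1: take-over aborted, transition to MRM")
    return events
```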

Fig. 5
A swimlane diagram with a timeline. An environment and a use case on top with a broken-down car are followed by a human with driving-related activity, a mediator with transition pattern M R M, and an automation with driving-related activity. Transition pattern T O R is across the human and mediator.

(Herzberger et al. [20], modified)

Swimlane diagram for the take-over process. The red frames show the connection of the actions to the respective pattern.

4 Exploration of the Confidence Horizon Cooperation Design

To explore the design options for the cooperation between human and co-system and in particular the HMI used in the use case of a breakdown vehicle, a Human Systems Exploration (as described by Flemisch et al. [13]) was conducted at the IAW Exploroscope.

The chosen use case was the appearance of a stopped vehicle in the center lane of a three-lane highway, with traffic in the left lane. To avoid a collision with the vehicle in front, there are two possibilities: either one brakes and stops in front of the vehicle, staying vulnerable to traffic from behind, or one changes to the right lane to avoid a collision. It is assumed that the automation is unable or not allowed to perform the evasive maneuver.

The setup consisted of two scenarios representing the safety buffer and safety gap cases in two different severity levels of time to collision (TTC) with \(TTC = 10\,\textrm{s}\) and \(TTC = 3\,\textrm{s}\), indicated by the distance between the ego and the breakdown vehicle.

A total of \(N = 12\) persons (\(41.67\,\%\) female, \(58.33\,\%\) male) with an average age of 30 years (\(\sigma ^2 = 7.98\)) participated in the exploration. Due to the Covid-19 restrictions in 2020, the exploration was conducted partly on-site (\(n = 5\) participants) and partly online (\(n = 7\) participants). A digital whiteboard tool was used for documentation in both cases.

Participants were shown all four resulting situations on a digital whiteboard with the confidence horizon markings (cf. Fig. 6, right) displayed for reference and asked to share their thoughts on how the co-system should communicate a take-over request to the human. They were given the task of drawing a sketch of their proposed head up display (HUD) concept.

Fig. 6
A photograph and a screenshot. Left. 2 people stand in front of a large computer display in a room. The display has 4 simulation frames of a road with cars. Right. A screenshot of a simulation of a road with cars in the daytime, from the perspective of a car with controls in front.

Snapshot from the on-site exploration (left) and a situation as it was displayed in the online exploration (right), showing the view of one situation as in the simulator

As a first finding, it should be noted that only one in 12 participants (\(8\,\%\)) would display the confidence horizon (as in Fig. 6, right) directly to the driver. \(42\,\%\) of participants would display the confidence horizon only for the ability of the co-system and only under certain conditions. \(50\,\%\) would never display it to the driver, especially because predicting human capability is perceived as confusing or uncanny; displaying information on an area where the co-system cannot control the vehicle, however, is considered plausible. From these results, it is concluded that the confidence horizon might be a useful tool for cooperation design and for initiating transitions with foresight, but should be used only with caution as an overly detailed HMI element.

Participants also noted that the information displayed in the visual HMI should be limited to focus attention, and that they prefer not to read text in a critical situation. \(33\,\%\) indicated that a general warning message in the corners of the visible area would be useful. \(42\,\%\) commented positively on the visualization of a lane change trajectory as well as on the display of the center lane trajectory with changing colors indicating the criticality of the distance to the obstacle ahead. Figure 7 shows the proposal for the safety buffer scenarios, combined from all the results collected. The participants wanted to be shown how much distance they still have before the situation becomes too critical if they do not react. The left lane is shown as blocked, and an arrow indicates the possible lane change to the right lane. An icon in the center of the field of view indicates the necessary action. The broken-down vehicle is highlighted with a frame in warning color (red), annotated with the remaining distance in meters. In the corners of the field of view (which might be realized as part of the HUD or as ambient lighting), flashing light colors emphasize the possible and impossible directions.

Fig. 7
Two screenshots of simulations of a road. Left. The left lane is shaded, zones in different shades are on the central lane, 2 icons are drawn at the center, and an arrow curves to the right. Right. A car is parked in the left lane with zones and an icon at the center, and a right-curved arrow.

Examples for combined hand-drawn HMI concepts from the exploration workshops on safety buffer scenarios. Left: scenario for \(TTC = 10\,\textrm{s}\); right: \(TTC = 3\,\textrm{s}\)

The safety gap scenarios were not fully understood by most of the participants. The main reason was that it is difficult to understand why the co-system would provide information on the situation even though it is itself failing at that very moment. This shows that it was unclear to the participants that, for the co-system, situation awareness and the ability to execute the driving task are separate. Most importantly, participants wanted transparency of the automation’s actions in both cases. For example, the co-system should inform the driver that a minimum risk maneuver is being executed and that the driver may only take over control after the maneuver is completed.

5 Simulator Study of the Confidence Horizon Cooperation Design

To evaluate the proposed application of the confidence horizons, a study with \(N = 20\) participants was conducted in the static driving simulator at the IAW Exploroscope of RWTH Aachen University. The study produced far more results than can be shown in the last part of this chapter, so only an overview is given here, with more detailed publications to follow. The study tested three different designs in two different use cases. The use cases were:

Use case 1 “Avoidance of broken-down vehicle”, starting on the highway in SAE level 3/4, where drivers engaged in a non-driving related task had to take over control and avoid the obstacle by changing from the center to the right lane, as the left lane is blocked by fast, dense traffic.

Use case 2 “Avoidance of collision at X-intersection”, starting on a rural road in SAE level 3/4, where drivers engaged in a non-driving related task had to take over control and avoid a collision with a vehicle coming from the right.

Since the use cases are already very detailed here, they could be considered as use situations. In order to maintain the conceptual connection to the other chapters, we will nevertheless continue to refer to use cases here.

Each participant experienced both use cases and one of the three cooperation designs:

Design 1 is the baseline: Here, the driver only receives an acoustic takeover request from the automation, combined with an immediate dropout/deactivation of the automated system.

Design 2 is a combination of the first design with an MRM (minimum risk maneuver). If the driver does not intervene after the dropout, emergency braking is automatically initiated.

Design 3 is a more complex, attention-sensitive design that combines the ideas of the confidence horizon: On the one hand, the driver’s ability to take over is determined by their orientation reaction, as proposed in the diagnostic TOR approach [19]. On the other hand, the capabilities of the automation are derived from the tested use cases. If the driver is classified as not ready to take over, a second warning stage is initiated. Depending on the human’s reaction to the TOR, their ability to execute the driving task, and the time remaining before the accident, the interaction mediator decides to either immediately return control to the automation, wait until the human is ready to take over, or immediately transfer control to the human. Thus, the time advantage resulting from the detection of the readiness to take over (see chapter by Herzberger et al. [18]) is used to either trigger a second warning, while a strong MRM is still possible, or an early and comfortable MRM. As in designs 1 and 2, the driver in design 3 receives a TOR combined with visual warnings in the HUD, based on the results from the exploration (see Fig. 8).
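The mediator decision in design 3 can be condensed into a small rule. This is a hedged sketch of the three outcomes described above; the boolean inputs and the time threshold are hypothetical placeholders for the co-system's actual predictions.

```python
# Hypothetical encoding of the design-3 mediator decision; inputs and the
# threshold value are assumptions for illustration.

def mediator_decision(driver_ready, time_remaining, mrm_threshold=2.0):
    """Choose between the three outcomes described for design 3."""
    if driver_ready:
        return "transfer control to human"
    if time_remaining > mrm_threshold:
        return "second warning stage"   # a strong MRM is still possible afterwards
    return "early comfortable MRM"      # return control to the automation
```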

Fig. 8
2 diagrams of use cases and 7 photographs of drivers at the controls of a simulator with screens that have central dots. Use cases 1 and 2 are for L 3 by 4 intersection and highway. The photos are for design 1 warning plus dropout, 2 warning plus M R M, and 3 warning plus M R M attention sensitive.

Simulator study of confidence horizon designs in two use cases (\(N = 20\); snapshots from the gaze scene video, the blue dot represents the driver’s gaze)

The photo at the bottom of Fig. 8 shows the HMI from design 3 in the highway use case with the broken-down vehicle. Here, the left lane, which is occupied by fast-moving traffic, is covered by a semi-transparent red wall. In addition, a hands-on symbol is displayed above the road, along with the text “please take over” (in German). Starting from the ego vehicle, a possible safe trajectory to the right lane is suggested by a green turn arrow. The clear right lane is also indicated by a green check mark at the bottom right of the windshield. In both designs with MRM (designs 2 and 3), the emergency braking can be overridden, and it does not start until it is detected that the driver is not responding to the TOR. Figure 9 shows a tree or state-transition diagram of the three designs.

Fig. 9
3 state transition diagrams. Design 1 for warning plus dropout has transition to manual from T O R issued regardless of action or gaze focus. Design 2 for warning plus M R M has actions on pedals or steering wheel. Design 3 for warning plus M R M-attention-sensitive has N E U or E U gaze patterns.

Tree representation of the three designs of cooperation patterns

\(N = 20\) subjects participated in the study (\(45 \%\) female). The age of the participants ranged from 18 to 54 years (\(M = 28.90\) years, \(SD = 12.57\) years). The results of the Karolinska Sleepiness Scale (KSS) as well as of the SOFI scale, which measure the fatigue of test subjects, did not differ significantly between the takeover design groups. Subjects were randomly assigned to the use cases intersection and highway and to the designs, so that each subject experienced one design and both use cases. The distribution of subjects was carefully balanced so that, as far as possible, there was an equal number of subjects in each design and in each possible use case sequence combination. \(n = 6\) subjects were assigned to design 1, \(n = 7\) to design 2 and \(n = 7\) to design 3. All subjects experienced each use case twice. The first use case trial is referred to as \(t_1\) and the second trial as \(t_2\).

6 Results and Discussion

The evaluation was carried out in accordance with the principle of balanced analysis, which combines and balances subjective with objective, quantitative with qualitative, individual with averaged, and time-longitudinal with time-lateral perspectives (see Fig. 10; e.g., Flemisch et al. [12]).

Fig. 10
A concept map for balanced analysis. Time includes longitudinal and lateral. Subjectivity includes qualitative and quantitative for use cases 1 and 2. Objectivity includes takeover performance for quantitative, and video and gaze replay for qualitative. Person includes individual and averaged.

Principle of balanced analysis in the example of the driving simulator study

The subjective data are further subdivided into results from the closed and open questions (quantitative vs. qualitative). An excerpt of the objective results is shown in Table 1, which presents the takeover success by design and use case.

Table 1 Takeover success by design and use case

Not surprisingly, the results reveal that across all designs and situations, subjects took over more successfully at \(t_2\) than at \(t_1\). Contrary to the hypothesis that subjects in design 3 would be fundamentally more successful in taking over the driving task than in designs 1 and 2, design 3 performed better than design 1 only in the intersection use case. In the highway use case, the results were reversed, indicating an effect of the cooperation design or of the experimental design. These influencing effects need to be investigated in more detail in order to avoid potential side effects of the more complex attention-sensitive design, and to realize for all use cases the true potential of the concept already seen in the results of one of the two use cases.

Fig. 11
10 line graphs for pattern data of participant number 9 plot A O I, G F, steer, pedals, and S P versus time in seconds. Use case 1 for success has larger peaks for A O I, steer, and pedals. Use case 2 for failure has larger fluctuations for G F and S P.

Example of a data set for a single participant. (AOI = gaze area of interest, 1 = front view, 2 = instrument cluster, 3 = center stack, 4 = mirror left, 5 = mirror right, 6 = rear mirror; GF = sum of normalized grip force activation; Steer = steering angle [deg]; Pedals = normalized pedal activation (straight line: accelerator, dashed line: brake); SP = change of seat pressure focus point (straight line: longitudinal coordinate, dashed line: lateral coordinate))

The analysis of data related to driver ability in both use cases and all designs was conducted on aggregated data sets, as shown exemplarily in Fig. 11. Each data set consists of gaze AOI (area of interest) data, grip force on the steering wheel, steering angle, pedal activation, and seat and seat-back pressure. The data sets were evaluated to find a maximally universal pattern that describes the ability or inability of the human driver to take over control after the TOR was issued.

Regarding the ability of the driver, the results indicate a possible detection of the inability to take over. Gaze behavior shows that only \(11.7\,\%\) of successful drivers looked at any mirror more than once; successful drivers tend to keep a stable gaze on the road, which tends to lead to a successful takeover but does not guarantee it.

While the initial driver gaze gives a hint at the early orientation behavior of drivers, its analysis also leads to the conclusion that a successful takeover cannot be described by driver gaze alone; hence, more data points (cf. Fig. 11) were added to the analysis.
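The combination of the signals from Fig. 11 into a single readiness check can be sketched as follows. The thresholds, field semantics and the simple conjunction are assumptions for illustration; only the AOI coding follows the figure legend.

```python
# Illustrative combination of gaze, grip force and driver input into one
# readiness check. Thresholds are hypothetical, not values from the study.

def driver_ready(aoi, grip_force, steering_angle, pedal_activation):
    """Combine the aggregated signals into a coarse readiness estimate."""
    gaze_on_road = (aoi == 1)        # AOI 1 = front view (cf. Fig. 11 legend)
    preparing = grip_force > 0.0     # hands on the steering wheel
    acting = abs(steering_angle) > 2.0 or pedal_activation > 0.1
    return gaze_on_road and preparing and acting
```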

Fig. 12
2 block diagrams. Top. Transition pattern driver takeover has can with orient gaze on road followed by want with prepare grip force greater than 0 and do with perform reasonable action. Bottom. Unsuccessful driver takeover has gaze not on road, prepare grip force = 0 and perform no or hazard action.

Top: Structure of the successful takeover pattern. Drivers follow the sequence orient, prepare, then perform. The time annotation of each interaction block shows the time window in which the event has to occur for the takeover to still be successful in both use cases, with \(t =\) observation time and \(t_0 =\) time point at which the TOR is issued by the automation. Bottom: Structure of the unsuccessful pattern. The driver fails to fulfill the subpatterns in the given time frame. The activation of subpatterns does not necessarily follow an order in this case

The combination of gaze, grip force and driver input (pedals and/or steering wheel) leads to a first model of a pattern for the successful control transition to the driver after the TOR was issued by the automation. Figure 12 displays the successful (Fig. 12, top) and unsuccessful (Fig. 12, bottom) patterns found. \(87\,\%\) of all successful drivers followed the successful transition pattern, while \(95\,\%\) of all unsuccessful drivers followed the unsuccessful pattern, which hints at a better performance of the unsuccessful pattern. Focusing on the orientation and preparation stages of the pattern alone, still \(82\,\%\) of both successful and unsuccessful transitions are detected.
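Matching the orient-prepare-perform sequence against observed stage times can be sketched as a small ordered-window check. The time windows below are hypothetical placeholders, not the measured values from Fig. 12, and the stage-time encoding is our assumption.

```python
# Sketch of matching the successful take-over pattern from Fig. 12:
# orient (gaze on road), prepare (grip force > 0), perform (reasonable action),
# each within its window after the TOR at t0. Windows are hypothetical.

STAGE_WINDOWS = {"orient": 2.0, "prepare": 4.0, "perform": 6.0}  # seconds after t0

def matches_successful_pattern(stage_times, windows=STAGE_WINDOWS):
    """stage_times maps each stage to its observed time after the TOR, if any."""
    t_prev = 0.0
    for stage in ("orient", "prepare", "perform"):
        t = stage_times.get(stage)
        if t is None or t > windows[stage] or t < t_prev:
            return False     # stage missing, too late, or out of order
        t_prev = t
    return True
```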

This analysis and first pattern model give an orientation on how to implement the human part of the confidence horizon; however, the transfer from post-processing to an online detection of the confidence horizon still has to be made. A more detailed report on the analysis and the pattern found will be published in the near future [26].

The subjective, qualitative results from the balanced analysis provided a variety of indications of possible causes as well as further adaptation options for the HMI. For example, several subjects across all designs (\(n = 6\)) stated that they would like to see a TOR notice on the tablet. Furthermore, a clearer description of the hazard situation via voice output instead of just a sound was desired (\(n = 4\)). The participants’ statements on perceived criticality, subjectively perceived takeover quality, and stress did not differ significantly between the designs, which is probably due to the small sample size. A detailed evaluation of the results and recommendations for the further development of the confidence horizon concept will be published in the near future.

7 Conclusion and Outlook

The initial concept of the confidence horizon, in conjunction with new ideas of diagnostic take-over requests (described in more detail in the chapter by Herzberger et al.), helped us to open up a new direction of attention- and ability-sensitive design of automated and cooperative systems. The concept can support design and development teams in cooperative vehicle automation, but also in other domains where machines and humans cooperate, to dynamically balance the abilities of agents and to design and engineer the transitions of control in a more transparent way than the traditional “on/off” thinking. With design explorations and experiments, some of which were described here, we were able to cut through a vast design and use space, at least in the driving simulator, and to identify its most prominent dimensions. Even if we are far from fully mastering this new space of attention- and ability-based transitions, the chances are good that, in close cooperation with other research projects, e.g., from the DFG priority program CoInCar, the first design patterns can already be transferred to real vehicles and products. Equally important, we have paved the ground for further research that will be necessary to fully master this design and use space of transitions, as an important aspect of cooperatively interacting vehicles and human-machine cooperation.