6 The Interaction Between Humans and Autonomous Agents

Humans represent knowledge and learning experiences in the form of mental models. This concept from the field of cognitive psychology is one of the central theoretical paradigms for understanding and designing the interaction between humans and technical systems.


The Design of Automated Systems
The question of user-appropriate design of automatic systems has been the subject of scientific discussions for decades [e.g. 2,3]. With the ever-expanding capabilities of technical systems, the issue is becoming increasingly important. Experience from various domains-notably including aviation-with the (partial) automation of technical systems has demonstrated that the safety and reliability of such systems cannot be achieved solely through the optimization of technical components. Indeed, the reliability of automated systems is largely determined by the quality of the interaction between the human and the machine. This applies in particular to situations in which the human is obliged to correct errors by the technical system and assume system control in the event of breakdowns or malfunctions.
Automation brings with it a shift of functions to technical systems that significantly changes the role and required capabilities of the human. For instance, in modern airplane cockpits, computer systems (e.g. flight management systems or autopilots) take over tasks that were previously carried out by the cockpit crew. The requirements for the pilot thereby shift from active manual control functions to tasks of programming and monitoring the aircraft automation. This human monitoring function, known as "supervisory control" [4], has made piloting easier and led to significantly enhanced flight safety [5]. At the same time, the psychological effects of the passive role of the system monitor, such as reduced attentiveness or activation, have caused massive safety problems [6]. Bainbridge [7] speaks of the "ironies of automation"-system functions are automated due to the fallibility of humans, and yet precisely this human is supposed to monitor the system and stand by as a fallback option in case of emergency.
The problems arising from the supervisory control design concept are extensively documented in the "human factors" scholarship and are subsumed under the heading "out-of-the-loop-unfamiliarity" (OOTLUF, [8]). The negative consequences of disconnecting humans from direct guidance and control are primarily concentrated in three areas which have been identified in different application contexts: insufficient or excessive trust in the automation [9], the loss of manual and cognitive capabilities [10] and difficulties in maintaining an appropriate degree of situation and system awareness [11]. An inappropriate degree of confidence in the system can result in insufficient monitoring or use of automated systems. Trust in automation is influenced by the reliability, comprehensibility and perceived usefulness of the system. The effects of the loss of manual and cognitive capabilities become salient at the moment when the user, faced with a malfunction of the automation, is suddenly forced to resume control of automated functions. Insufficient training and practice of skills can lead to decreased effectiveness in terms of both motor and cognitive skills. The "out-of-the-loop" effects are particularly noticeable with regard to perception and the correct interpretation of system processes-i.e. situation awareness. The reasons for insufficient situation awareness flow primarily from insufficient monitoring of the system, changes to or complete breakdown of feedback (e.g. tactile stimuli from the steering wheel), the lack of transparency of the automation and inadequate understanding of the system due to complexity. From a cognitive psychology standpoint, humans lack the corresponding mental models (i.e. knowledge and skill structures) to understand how the automation works [12].
The negative experiences that resulted from technology-centered design approaches have led to a reconsideration of system design. Due to this imperative to keep the human "in-the-loop" by ensuring controllability, transparency and predictability, the concept of human-centered automation has largely established itself as the dominant design principle for automated systems [e.g. 13,14]. The fundamental premise here is that the human bears ultimate responsibility for the overall system regardless of the degree of automation. In this context, man and machine are regarded metaphorically as cooperating partners [15]. Design concepts for adaptive automation pursue this aspect even further and allocate functions to the human and the machine dynamically depending on situational requirements [16]. Extensive studies of the application of these design strategies have identified the benefits, but also underscored the difficulties and future challenges associated with them [e.g. 17].
The rising complexity and autonomy of socio-technical systems, however, casts doubt on the appropriateness of the imperative of human responsibility and confronts existing concepts with the problem of designing conflict-free interaction between two autonomously deciding system elements-the human and the machine [18,19]. The human-centered design approach therefore requires a more thoroughgoing development or indeed overhaul [20], which in turn may only be possible by way of a broad-based societal discussion on fundamental questions with regard to the desired role of automation in everyday life [21]. Use contexts and frequencies as well as the skills and expertise of users, however, vary substantially across the different domains, so it may be necessary to devise specific design concepts for the automotive sector that adequately reflect the heterogeneity of car drivers.

Automation in the Car
In the automotive sector as well, the transition of the human role from active operator to passive supervisor of the system is advancing apace. Media reporting on the subject of autonomous driving conveys the impression that driverless vehicles will improve road safety in the near future [e.g. 22]. Yet although even today individual functions in vehicles are performed by automated functions such as adaptive cruise control, in the foreseeable future the technology will not be able to dispense with the availability of the human driver, who will continue to assume control functions and make strategic decisions [23].
Still open is the question of how best to define the role of the human along the path to completely autonomous vehicles in a way that is both psychologically apt and commensurate with user requirements. While the insights and experiences from the aviation sector described above provide an interesting starting point for addressing this question, their usefulness for design concepts in the automotive field is limited due to the greater complexity and dynamism of the environment in road traffic. A growing number of studies in recent years has focused on the interplay between partially and highly automated driving functions and human behavior [see also 24,25]. Here too, the focus of these deliberations is the familiar problematic issues with regard to automation across a range of different automation levels: trust, skill atrophy and situation awareness.
Automation is only useful if the operators trust the technical system and thus also use it. The central challenge in designing automated systems is to generate sufficient trust in them. At the same time, errors in the automation can lead to an erosion of trust [26]. Excessive trust, meanwhile, can lead to insufficient monitoring and control of the automation ("overtrust" or "complacency" [27]). The majority of studies on the subject to date have focused on the reciprocal effects of trust in the use of Adaptive Cruise Control (ACC). A certain degree of trust can even be an important prerequisite for the willingness to use driver assistance systems [28]. In a longitudinal study in a driving simulator, Kazi et al. [29] investigated the effect of the reliability of ACC on the perceived trust in these systems. The results show an increase in trust over time for reliable systems, but not commensurate with the objective reliability of the automation. Koustanai et al. [30] come to similar results in their study, which examined changes in behavior and trust through the systematic graduation of experience levels in the use of collision warning systems. The participant group with the highest level of experience had no accidents in the simulator and reacted more appropriately in critical situations than drivers with less experience. The level of system experience was also positively correlated with the expressed trust in the system, albeit without influencing the acceptance of the automation. In contrast to these findings are the results of several studies that found no significant change in trust levels in ACC through repeated use [e.g. 31,32]. The causes of these inconsistent results could include moderating factors that have been examined in recent studies. Flemisch et al. [33] and Beggiato et al. [34] emphasize the significance of analogous (previously established) mental models regarding the functionality of the respective automation. Verberne et al. [35] and Waytz et al. [36] take things a step further: on the basis of experimental studies, they show that shared intentions and needs between the human and the machine, as well as anthropomorphic characteristics of the automation, can be further important factors in establishing trust in automated systems.
Guiding a vehicle demands a wide range of capabilities and skills of the driver, both on the perceptual-motor level (e.g. steering, shifting gears, etc.) and the cognitive level (e.g. making decisions, focusing attention selectively, etc.). Automated execution of these tasks can lead to the loss of the respective skills and at the same time increase dependence on the technical system [37]. The fundamental significance of the subject was underscored by a recent safety alert issued by the United States' Federal Aviation Administration [38]. The alert calls on pilots to choose the manual flight mode instead of autopilot more frequently, as the loss of skills due to insufficient practice represents an increasing safety risk for aviation. Although the author is not aware of any studies on the problems of skill loss in (partially) automated vehicles, it may be presumed that these effects also occur in the field of vehicle automation. Adaptive or cooperative automation concepts offer the opportunity to counteract such problems and help maintain critical driving skills until completely autonomous vehicles become a reality.
The ability to correctly perceive and interpret complex and dynamic driving situations is predicated on a series of cognitive processes (e.g. attentiveness, memory, mental models) [12]. Monotonous monitoring tasks or distraction by other activities (e.g. using a telephone) can result in these processes not being adequately available for situation awareness in the vehicle. These effects can occur even in the use of systems with a low degree of automation such as Adaptive Cruise Control (ACC). Buld et al. [39] were able to demonstrate that drivers using ACC neglected certain aspects of the driving activity and environmental conditions and consequently incorrectly interpreted system limits. Increased lane drift and late reactions to critical events were interpreted in a study by Ward [40] as indicators of reduced situation awareness while driving with ACC. The analyses of Ma and Kaber [41], however, suggest that situation awareness can also be improved through the use of ACC. A more differentiated picture of these contradictory results is provided by recent studies on the consequences of highly automated driving. In a simulation study, Merat et al. [42] examined the effects of performing a secondary task on driving behavior during automated driving. The study showed that reactions to critical incidents in highly automated and manual driving conditions without a secondary task were comparable. Distraction by a secondary task, however, resulted in significantly higher-speed driving following manual takeover from the automated system. The authors attributed the finding to the reduced situation awareness due to the distraction posed by the secondary task.
The problematic issues raised here represent just a sampling of the challenges that need to be resolved with regard to the interplay between humans and automated vehicles. Many questions with respect to the mental adjustments and changes will only be answerable following the concrete implementation and scientific study of the next-higher levels of vehicle automation (see automation levels BASt, [43]). The design of interfaces, appropriate feedback and avoiding diffusion of responsibility are topics that are being addressed today in new design concepts and implemented in the prototype stage for highly automated vehicles [e.g. 44]. Which learning experiences, reciprocal effects and changes to mental models will ultimately emerge from the use of these systems, however, can only be determined through representative, longitudinal studies.

What Are Mental Models?
Mental models are cognitive-emotional representations of objects, object relationships and processes-in short, internal representations of the external world. The concept of mental models was first used by the psychologist Craik [45], who postulated that people develop simplified mental models of the functioning and processes of their environment. These models are used for orientation, understanding, reasoning and the prediction of events. Craik's approach was later further developed by Johnson-Laird [46] to describe and study deductive reasoning and language comprehension.
In the cognitive psychology literature, there is widespread consensus [see also 47] that mental models are dynamic in nature and can be described in terms of three central characteristics. First, mental models are created in working memory and enable individuals to simulate possible actions and their consequences [1]. Thinking is thus the manipulation of mental models. Second, mental models can represent causes and causal relationships; they generate a causal understanding of how systems function [48]. Third, mental models can change over time through experience-i.e. they are capable of learning. The quality of the models and the conclusions based on them continue to develop through specific learning experiences [49]. With increasing expertise, the understanding of technical matters moves from concrete to abstract representations-a relevant factor for human-machine interaction.
Applied fields such as technology design sometimes follow different interpretations of the definition of mental models [see also 1], which can be explained by their different activity contexts. Yet even earlier work underscored the significance of the concept for predicting and understanding human behavior in interactions with technical systems [e.g. 50]. Mental models are thus based on context-specific expectations and prior experience as well as the current perception of system characteristics. They form the foundation of the user's understanding of the system and decision-making. This means that both the error-free use of and trust in technical systems are largely determined by the degree to which the functioning of the machine is compatible with the user's expectations [33].
Compatibility in the context of mental models is defined in terms not only of operability, but also the user's experience and general acceptance of technology. Zhang and Xu [51] postulate in this regard a modification or restructuring of existing mental models with the introduction and use of new technologies. A lack of compatibility can lead to frustration and negatively impacts acceptance and diffusion rates [52]. However, if new systems correspond with expectations (i.e. the existing mental models), this results in heightened system trust and a positive user experience [53].
Mental models thus comprise representations of human knowledge, attitudes, values and emotions that interact with the environment. With respect to the automation of vehicles, both the cognitive-psychological processes of information processing and the influence of higher mental structures (e.g. needs, expectations, wishes, etc.) are important. The interdependency of these different levels has been emphasized in theoretical models of the role of the driver in automated vehicles [e.g. 54,55]. Ultimately, the appropriate modification and adaptation of mental models will play a major role in determining the nature and frequency of use, as well as the acceptance, of these systems. The successful transition-as yet undefined-of the driver's role in automated vehicles therefore requires an integrative examination of the scholarship on human behavior in partially and highly automated systems as well as the emergent ideas and requirements with regard to Full Automation Using Driver for Extended Availability. Put another way, human-centered technology design implies not only a consideration of the technical possibilities and limits, but also a focus on individual and societal values and objectives.

Mental Models of Autonomous Driving
Many people regard autonomous vehicles as a concept for the distant future. Though many may have imagined how appealing it would be to sleep or read a newspaper during a drive, knowledge about autonomous vehicles remains sparse among the general population. Decisions regarding the use and acceptance of innovations, however, are not based solely on rational knowledge [56]. Contrary to the notion of humans as rational, benefit-maximizing decision-makers (homo economicus), humans tend to employ simpler decision-making strategies which reduce the amount of information to be processed and are influenced by emotional processes [57][58][59]. Attitudes and decisions are not infinitely amenable to change merely through the provision of more information. Rather, new information is received and processed selectively so as to agree with existing desires, expectations and goals, i.e. the human's mental models [60]. It is therefore crucial to the success of an innovation that cognitive perceptions and evaluations of it can not only be integrated into existing mental models, but also appeal to the emotional side of the equation [61,62].
In addition to numerous studies on the technical, legal and cognitive aspects of the automation of vehicles, to date there have been few studies that have examined the preferences and expectations of potential users. The largest representative international survey on the subject to date [63] focused primarily on the acceptance of and willingness to use automated vehicles. The results for Germany show that automated vehicles are considered by a majority to be a beneficial technological advance. At the same time, half of the respondents express fear regarding automated driving and doubt that the technology will function reliably. In a comparison of multiple use scenarios, long highway trips are most commonly mentioned as the preferred potential use of autonomous driving. Interestingly, the authors find a positive correlation between the acceptance of driver assistance systems and the acceptance of automated driving. One potential explanation could be that the formation of suitable mental models for the characteristics of partially automated systems also has a positive impact on the acceptance of higher automation levels [see also 34].
Which attitudes and cognitive and emotional representations underpin the acceptance or rejection of automated vehicles is still unknown. In addition to the aforementioned cognitive-psychological requirements for the design of the human-machine interaction, however, these factors represent an important prerequisite for the success of the transformation in the transportation sector. The aim of the quasi-representative online survey study introduced here was to generate a differentiated, to some extent explorative, picture of the perceptions of autonomous driving across the use cases generated in the project. The questionnaire was developed with the following overarching questions in mind: "With which mental models do potential users encounter the new role of the driver in autonomous vehicles?"; "Which automated elements of vehicle guidance are most amenable to the mental models of the users?"; "Which control functions and intervention options by the driver do potential users expect in autonomous vehicles and how can acceptance of this line of innovation be increased?"; "Which experience and design elements in autonomous vehicles can replace previous representations of the role of the driver and thus increase acceptance of this line of innovation?".

The questions in the second, special part of the questionnaire related to the four use cases developed in the project and were divided into the following topic groups: free associations on the use case; willingness to use the technology; anticipated use scenario; anticipated impact on prior transportation usage; assumed fulfillment of needs; emotional reactions; trust and acceptance; need for control and intervention; and preferred secondary tasks during automated driving. To reduce the processing time, the questions regarding the four different use cases (see below) in this part were not answered by all participants.
After answering the questions in the first part, the sample was split and the study participants randomly assigned in equal numbers (N = 250 in each case) to one of the four use cases. The questionnaire comprised 438 items, with each participant answering 210 questions following distribution of the use cases. The survey questions were taken partly from earlier mobility surveys [62,64] and were partly newly developed; the questions from part two in particular were checked for comprehensibility in a pretest.
For all attitude questions, a six-point scale was used (1 = Completely disagree, 6 = Completely agree; with some questions, the codes differed due to the content) to assess agreement with the statement. The affective significance of the terms in the field of mobility was surveyed using the semantic differential method [65]. For the three dimensions of valence, potency and arousal, bipolar, nine-point scales (from -4 = extremely to 0 = neutral to 4 = extremely) were used, in which the extremes were designated by the adjectives unpleasant-pleasant (valence), weak-powerful (potency), and calming-exciting (arousal). Current traffic behavior was recorded via selection options and frequency categories.

Sample
Participants were recruited through a commercial market research panel of the company Respondi AG (http://www.respondi.com/de/) and paid by the same for their participation. The company assembled a participant group that was representative of the overall German population with respect to age, gender, education and income. A total of N = 1,363 people completed the survey in its entirety. Some people, however, answered the questions in such a short time that it is doubtful that the questions were answered conscientiously. As a consequence, all participants whose processing time was less than 1,000 s were excluded from further analysis. The sample was thereby reduced by N = 230 to N = 1,133. In a further step, the distortion of the original ratios that resulted from the exclusion was corrected by removing N = 133 randomly selected females to achieve a roughly representative distribution at least with respect to gender proportionality. The average processing time of the remaining sample (N = 1,000) was 1,897 s (= 31.6 min; SD = 780 s). The precise demographic composition of the sample can be taken from
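As a minimal sketch, the exclusion criterion and timing statistics described above could be computed as follows; the records are invented placeholders, not the survey's actual data:

```python
import statistics

# Hypothetical (participant_id, processing_time_s) records; the values
# are illustrative placeholders only.
records = [(1, 850), (2, 1900), (3, 2400), (4, 1700), (5, 960)]

# Exclude everyone whose processing time was under 1,000 s.
valid_times = [t for (_, t) in records if t >= 1000]

mean_time = statistics.mean(valid_times)   # average processing time in seconds
sd_time = statistics.stdev(valid_times)    # sample standard deviation
```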

Data Analysis

Affective Similarity
The affective similarity between the terms evaluated via the semantic differential method was calculated as the three-dimensional Euclidean distance d between the average EPA profile (E = valence, P = potency, A = arousal) of the term "ideal drive" and the average EPA profiles of the other terms:

d = √[(I_E − B_E)² + (I_P − B_P)² + (I_A − B_A)²]

where I refers to the evaluation of the "ideal drive," B to the respective evaluation of the other terms, and the subscript letters denote the EPA dimensions.
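This distance computation can be sketched in a few lines of Python; the EPA profiles below are invented placeholders rather than the study's actual mean ratings:

```python
import math

def affective_distance(ideal, term):
    """Three-dimensional Euclidean distance between two EPA profiles.

    Each profile is a (valence, potency, arousal) triple of mean ratings
    on the -4..+4 semantic differential scales.
    """
    return math.sqrt(sum((i - b) ** 2 for i, b in zip(ideal, term)))

# Hypothetical mean EPA profiles (not values from the survey)
ideal_drive = (2.8, 1.5, 0.2)
chauffeur = (2.1, 1.2, -0.1)

d = affective_distance(ideal_drive, chauffeur)  # small d = high affective similarity
```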

Results
The first line of inquiry was to what degree the topic of autonomous driving is even known among the general public, whether there is broad interest and how people spontaneously feel about the technology. Less than half of respondents (44 %) claimed to have no knowledge of the subject, while the majority had already heard of it (33 %), read about it (16 %) or claimed to have a higher level of expertise (4 %). A similar distribution was found in regard to interest in the subject of autonomous driving. A majority of participants (58 %) described themselves as "somewhat," "quite" or "very" interested in the subject. However, a majority (56 %) also cannot imagine replacing their current preferred means of transportation with an autonomous vehicle. Thus in spite of a relatively high degree of interest and some prior knowledge, a majority of the public manifests a certain reluctance towards the use of autonomous vehicles.
6.3.2.1 Driver Assistance Systems and Giving up Driving Responsibilities

As discussed above, the use and acceptance of driver assistance systems can have a positive effect on the general perception of autonomous driving. The results of the present study show that most respondents (67 %) have already heard of driver assistance systems. Among people who use a passenger car on a daily basis (82 %), cruise control (50 %), acoustic parking assistants (46 %) and high-beam assistants are the most frequently used systems. Other systems such as adaptive cruise control (ACC, 15 %), night vision assistant (11 %), head-up display (10 %) or attention assistant (8 %) are only used by a minority in everyday situations.
The expressed desire to give up certain driving tasks and functions to an automated system yields similar results. Figure 6.1 shows the task-specific distribution of desires in the category spectrum from "absolutely not" to "very willingly." In a comparison of the different driving tasks it becomes clear that aside from the overwhelming rejection (62 % in the categories "absolutely not" and "preferably not") of the idea of completely ceding vehicle control to a driving robot, people are particularly averse to giving up the task of steering the vehicle (58.3 % in the categories "absolutely not" and "preferably not") to an automated system. At the same time, respondents view transferring parking tasks (45 % in the categories "willingly" and "very willingly") as well as safety-related assistance in the area of vehicle stabilization (43 % in the categories "willingly" and "very willingly") and pedestrian recognition (43 % in the categories "willingly" and "very willingly") more favorably.

Representations of the Driver's Role and Use Cases
Employing the semantic differential method, the study surveyed the affective significance of various terms related to different roles in the vehicle and the scenarios described in the use cases among all participants. The concept of the "ideal drive" and the conventional "car" were also evaluated in this fashion. The raw results (average evaluations on the scales valence, potency and arousal) are displayed in Table 6.2.
The results were used to calculate the Euclidean distances and thus the affective similarity between the term "ideal drive" and the other terms (for methodological details see [61,66]). A visualization of these calculations is provided by Fig. 6.2, in which the Euclidean distance d of the evaluated terms is represented on the x-axis. Low values indicate a smaller distance and thus higher affective similarity between the terms, i.e. they elicit a stronger positive association for the respondents. It is clearly evident that "chauffeur" comes closest to "ideal drive" from an affective standpoint, while "co-pilot" least corresponds to this emotional representation. In a comparison of the various use cases for autonomous driving, it emerges quite clearly that the Vehicle-on-Demand concept deviates most strongly from the idea of an ideal drive, while vehicles with Autonomous Valet Parking are most closely associated with it. The significantly more positive affective positioning enjoyed by conventional cars in comparison to the use cases could therefore represent a major impediment to acceptance with the introduction of Full Automation Using Driver for Extended Availability in particular. As concerns the role of the driver, the affective representations revealed in the study underscore the role preference explicitly addressed in another question. In this item, participants used a slider to indicate which role they would like to assume in an autonomous vehicle (1 = passenger and 10 = supervisor). The arithmetic mean of 6.36 (SD = 2.9) indicates a preference for the role of an active supervisor who is able to maintain control over the vehicle at all times based on continuously available system information. On the affective level, the role of the passive passenger (d = 2.1) is still visibly remote from the desired ideal (d = 0).

Cognitive and Emotional Representations of the Use Cases
As described above, the overall sample in this part of the questionnaire was randomly divided into four subgroups of equal size (each N = 250) and assigned to one of the four use cases (Interstate Pilot Using Driver for Extended Availability (1), Autonomous Valet Parking (2), Full Automation Using Driver for Extended Availability (3) and Vehicle on Demand (4)). This enabled an inter-group comparison of the expectations and attitudes toward the individual scenarios. At the beginning of this section, participants were asked about their willingness to use the briefly described variants of autonomous driving. Autonomous vehicles with valet parking were the most popular (53 %), followed by Full Automation Using Driver for Extended Availability (45 %) and Interstate Pilot Using Driver for Extended Availability (42 %). The lowest intent to use was registered by the Vehicle-on-Demand concept (35 %). According to the conducted analysis of variance (ANOVA), the differences are statistically significant (F(3, 996) = 4.528; p < 0.01). The Bonferroni post hoc test (pairwise comparison of means) indicates, however, that only the Autonomous-Valet-Parking and Vehicle-on-Demand use cases differ significantly in terms of intent to use (p < 0.01).
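A one-way ANOVA of the kind reported here can be illustrated with a minimal sketch; the ratings below are invented placeholders, and the helper is a generic textbook implementation of the F statistic, not the study's actual analysis code:

```python
import statistics

def one_way_anova_f(groups):
    """F statistic of a one-way ANOVA over a list of rating groups."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total sample size
    grand_mean = statistics.mean(x for g in groups for x in g)
    means = [statistics.mean(g) for g in groups]
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares (df = n - k)
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical intent-to-use ratings for the four use cases (placeholders)
groups = [
    [4, 5, 3, 4],  # Autonomous Valet Parking
    [3, 4, 3, 4],  # Full Automation Using Driver for Extended Availability
    [3, 3, 4, 3],  # Interstate Pilot Using Driver for Extended Availability
    [2, 3, 2, 3],  # Vehicle on Demand
]
f_value = one_way_anova_f(groups)
```

A Bonferroni post hoc test, as reported in the study, would then compare the group means pairwise with the significance level divided by the number of comparisons.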
In response to the question of the extent to which various mobility needs would be fulfilled through the use of an autonomous vehicle, somewhat differing assessments emerge in a comparison of the four scenarios. Table 6.3 shows the averages of these evaluations and the statistical results (ANOVA and Bonferroni post hoc test). From an overall perspective, it can be seen that autonomous vehicles are perceived as convenient, stress-free and environmentally friendly. Statistically relevant differences between the use cases arise with regard to (lack of) stress, convenience, safety and time savings. According to respondents' assessments, Autonomous Valet Parking most effectively addresses the needs for time savings, convenience and freedom from stress, which helps explain the high acceptance of this variant of autonomous driving. From a critical standpoint, the safety concerns related to the Vehicle-on-Demand use case stand out.
The emotional evaluation of the use cases was conducted with regard to 10 different emotions (hopefulness, relaxation, satisfaction, happiness, concern, anger, stress, powerlessness, dislike, fear). The participants were asked to indicate which emotions they would experience in the anticipated use of the respective variant of autonomous driving.
The results (see Table 6.4) confirm the tendencies found in the differences that emerged in the comparison of the use cases described above. The strongest positive associations were found in connection with Autonomous Valet Parking. The feelings of satisfaction, relaxation and happiness are also significantly more strongly represented here than in the other scenarios. In the use cases Interstate Pilot Using Driver for Extended Availability, Full Automation Using Driver for Extended Availability and Vehicle on Demand, the emotions of powerlessness and fear dominate. The feeling of being at the mercy of forces beyond one's control is associated with these emotions and represents a major hurdle to acceptance. Aside from Autonomous Valet Parking, only Full Automation Using Driver for Extended Availability evokes above-average positive emotions such as happiness, hopefulness and satisfaction, although the negative emotions do predominate in this use case. These results provide a differentiated picture of the emotional building blocks of the most important emotion in the field of automation: trust. Trust in the described variants of autonomous driving was measured in this survey based on four items (e.g. "I can imagine relying on such a system in my everyday mobility"), rated, analogous to the other attitude items, on a 6-point Likert scale. A summary index was computed from these items. As expected, trust is highest in vehicles with Autonomous Valet Parking (M = 3.45; SD = 1.31) and lowest for the Vehicle-on-Demand concept (M = 3.10; SD = 1.42). Trust in vehicles with Interstate Pilot Using Driver for Extended Availability and fully automated vehicles lies at roughly the same level (M = 3.36; SD = 1.33 vs. M = 3.28; SD = 1.33). Only the differences between the Autonomous-Valet-Parking and Vehicle-on-Demand use cases are statistically significant.
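The construction of such a summary index is straightforward: each respondent's four item ratings are averaged into one trust score, and the group mean (M) and standard deviation (SD) are then computed over these scores. The following is a minimal sketch with simulated ratings (the item responses here are hypothetical, not the survey data).

```python
# Minimal sketch with simulated data: a trust index as the per-respondent
# mean of four 6-point Likert items, as described for the survey.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical responses: 250 respondents x 4 trust items,
# each rated 1 (disagree) to 6 (agree)
items = rng.integers(1, 7, size=(250, 4))

trust_index = items.mean(axis=1)  # one composite score per respondent
M = trust_index.mean()
SD = trust_index.std(ddof=1)      # sample SD, as usually reported
print(f"M = {M:.2f}; SD = {SD:.2f}")
```

Because the index is a mean of items on the same 1-to-6 scale, it stays on that scale, which is why the reported group means (e.g. M = 3.45 for Autonomous Valet Parking) fall near the scale midpoint.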

Summary and Conclusions
The focus of this chapter has been the interaction between humans and autonomous vehicles. Proceeding on the assumption that automated vehicles will for the foreseeable future depend upon the availability and control of the human, we first looked at the cognitive-psychological effects of the human-machine interaction. This was followed by an empirical study of the user perspective on autonomous driving through an extensive online survey. The study focused in particular on the attitudes, expectations and emotions (the mental models) toward the subject of autonomous driving.
Based on the scholarship thus far on the psychological consequences of automation in different domains (e.g. aviation, production), it may be concluded that as we proceed towards Full Automation Using Driver for Extended Availability, designers and developers would do well to place greater emphasis on the human at the core of their endeavors. Even with the partially automated systems available today, drivers display well-known problems such as excessive trust and reduced situation awareness. The long-term effects of higher degrees of automation, and of the associated lengthier periods of mental decoupling from the task, on the cognitive and motor skills required of drivers are still largely unknown. The effects found in this regard for highly trained and experienced airline pilots, however, are alarming [38]. Training and the regular manual execution of automatable driving tasks thus seem to be an important instrument for maintaining the required and desired skills of the driver.
As long as the human is part of the availability concept of automated vehicles, whether as a supervisor of the system or as the one who takes over the driving task, both the human and the machine need a suitable representation of the respective other agent. Transparent interfaces adapted to the mental system of the human are the prerequisite for the necessary situation and system awareness in interactions with the automated system. On the other hand, the technical system must also be able to correctly interpret the mental state of the driver, her intentions and behavior, and dynamically represent them in a driver model. In adaptive and cooperative design concepts, these aspects have already been implemented in highly automated vehicle prototypes [44,67]. Moreover, vehicle manufacturers and research institutions are currently working on potential solutions to these problems in a range of different projects (www.adaptive-ip.eu; www.incarin.de; www.urban-online.org).
The survey results highlight some emerging contradictions between what is technically feasible and the innovations actually desired by the public. Although a majority of drivers have become accustomed to handing over certain driving tasks (e.g. cruise control) to assistance systems, most people are highly averse to the idea of actually letting go of the steering wheel. The current cognitive and affective representations of the role of the driver are still very strongly associated with the conventional image of an active chauffeur. The notion of assuming the role of a passive passenger finds little acceptance. The conventional, manually controlled vehicle is still so strongly associated with the ideal image in the public mind that, for the majority, completely autonomous vehicles do not fulfill mobility needs. The open question is whether a step-by-step, evolutionary automation of vehicles can achieve the requisite changes to the mental models associated with role expectations in autonomous vehicles. A situation-specific transfer of driving tasks to the autonomous vehicle may, as the high acceptance of Autonomous Valet Parking illustrates, represent a more fruitful alternative.
Moreover, the results of the survey offer ideas on possible transformation strategies oriented toward the needs and emotions of potential users. The main argument for the introduction of autonomous vehicles in previous public debates has been increased road traffic safety. This perception is not shared by the general public, however. Rather, the participants in this study see the benefits of autonomous vehicles in stress reduction, convenience and environmental friendliness. At the same time, associated emotions such as powerlessness and fear are powerful factors that pose a major impediment to acceptance. The human thinking apparatus is not capable of objectively estimating the risk of rare events [58], so fears and concerns can lead to irrational decisions. From this perspective, user-centered development means taking account of existing needs both in communication and in the concrete design of the systems.
For the potential user, the question is ultimately the added value of an autonomous vehicle compared to the still highly regarded manually controlled vehicle. What should be the focus of one's attention if one is no longer required, or indeed able, to concern oneself with the control of the vehicle for safety reasons? Contrary to expectations, a majority of participants was not interested in the extended range of infotainment options from the internet to television, but instead preferred to enjoy the landscape uninterrupted. Just how stable and valid these assertions prove to be in concrete interactions with automated vehicles will have to be addressed in future studies. But perhaps this need follows in the tradition of German romanticism and will offer a new impetus for the design of an automated, "close to nature" space.
Open Access This chapter is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, a link is provided to the Creative Commons license and any changes made are indicated.
The images or other third party material in this chapter are included in the work's Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work's Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material.