
1 Introduction

Humans represent knowledge and learning experiences in the form of mental models. This concept from the field of cognitive psychology is one of the central theoretical paradigms for understanding and designing the interaction between humans and technical systems [1]. In this context, mental models serve, firstly, to describe human information processing, e.g. to answer questions like how fast incoming information is perceived and stored, or which information a human thinking apparatus needs to react adequately to changed environmental conditions. Secondly, mental models are a means of conceptualizing representations of knowledge and functional assumptions in order to, for example, understand and predict the behavior of users in their interactions with automated systems.

The automation of vehicle guidance fundamentally changes the demands on the cognitive system of the vehicle driver. As the degree of automation rises, the role of the human as a physically active decision-maker in the vehicle is ultimately replaced by automated systems. Previously important patterns of behavior (e.g. for carrying out steering maneuvers) are no longer required and may be unlearned, while at the same time new skills (e.g. system monitoring) and a new understanding of the system have to be learned. Underlying mental models must be modified or restructured. For the safety and acceptance of autonomous vehicles, it will be crucial to define the new roles for humans in autonomous vehicles such that they both correspond to the capabilities of the human information processing system and conform to the expectations and needs of humans. This chapter will examine these two aspects. In view of the insights regarding automation that have been gained in various domains, it will consider which cognitive and emotional dimensions need to be taken into account in designing automated vehicles. On the basis of a Germany-wide survey conducted together with the co-authors of this book Rita Cyganski, Eva Fraedrich and Barbara Lenz, it will also look at the mental models with which potential users approach autonomous vehicles.

This chapter is divided into two main sections. The first part presents an overview of the central models, design concepts and findings regarding automation in view of the challenges and problematic areas of human-machine interaction. This is followed by a summary of research into the cognitive effects in (partially) automated vehicles. Part one concludes with an elaboration of the theoretical background of the concept of mental models. The second part is dedicated to the results of the online survey. The mobility, control and experience requirements, as well as the emotional responses, of potential users of autonomous vehicles are categorized according to the use cases developed in the project. The chapter ends with a summary of the results and conclusions.

2 The Human Factor in Autonomous Vehicles

2.1 The Design of Automated Systems

The question of user-appropriate design of automatic systems has been the subject of scientific discussions for decades [e.g. 2, 3]. With the ever-expanding capabilities of technical systems, the issue is becoming increasingly important. Experience from various domains—notably including aviation—with the (partial) automation of technical systems has demonstrated that the safety and reliability of such systems cannot be achieved solely through the optimization of technical components. Indeed, the reliability of automated systems is largely determined by the quality of the interaction between the human and the machine. This applies in particular to situations in which the human is obliged to correct errors by the technical system and assume system control in the event of breakdowns or malfunctions.

Automation brings with it a shift of functions to technical systems that significantly changes the role and required capabilities of the human. For instance, in modern airplane cockpits, computer systems (e.g. flight management systems or autopilots) take over tasks that were previously carried out by the cockpit crew. The requirements for the pilot thereby shift from active manual control functions to tasks of programming and monitoring the aircraft automation. In aviation, for example, this human-performed monitoring function known as “supervisory control” [4] has made piloting easier and led to significantly enhanced flight safety [5]. At the same time, the psychological effects of the passive role of the system monitor, such as reduced attentiveness or activation, have caused massive safety problems [6]. Bainbridge [7] speaks of the “irony of automation”—system functions are automated due to the fallibility of humans, and yet precisely this human is supposed to monitor the system and stand by as a fallback option in case of emergency.

The problems arising from the supervisory control design concept are extensively documented in the “human factors” scholarship and are subsumed under the heading “out-of-the-loop-unfamiliarity” (OOTLUF, [8]). The negative consequences of disconnecting humans from direct guidance and control are primarily concentrated in three areas which have been identified in different application contexts: insufficient or excessive trust in the automation [9], the loss of manual and cognitive capabilities [10] and difficulties in maintaining an appropriate degree of situation and system awareness [11]. An inappropriate degree of confidence in the system can result in insufficient monitoring or use of automated systems. Trust in automation is influenced by the reliability, comprehensibility and perceived usefulness of the system. The effects of the loss of manual and cognitive capabilities become salient at the moment when the user, faced with a malfunction of the automation, is suddenly forced to resume control of automated functions. Insufficient training and practice of skills can lead to decreased effectiveness in terms of both motor and cognitive skills. The “out-of-the-loop” effects are particularly noticeable with regard to perception and the correct interpretation of system processes—i.e. situation awareness. The reasons for insufficient situation awareness flow primarily from insufficient monitoring of the system, changes to or complete breakdown of feedback (e.g. tactile stimuli from the steering wheel), the lack of transparency of the automation and inadequate understanding of the system due to complexity. From a cognitive psychology standpoint, humans lack the corresponding mental models (i.e. knowledge and skill structures) to understand how the automation works [12].

The negative experiences that resulted from technology-centered design approaches have led to a reconsideration of system design. Due to this imperative to keep the human “in-the-loop” by ensuring controllability, transparency and predictability, the concept of human-centered automation has largely established itself as the dominant design principle for automated systems [e.g. 13, 14]. The fundamental premise here is that the human bears ultimate responsibility for the overall system regardless of the degree of automation. In this context, man and machine are regarded metaphorically as cooperating partners [15]. Design concepts for adaptive automation pursue this aspect even further and allocate functions to the human and the machine dynamically depending on situational requirements [16]. Extensive studies of the application of these design strategies have identified the benefits, but also underscored the difficulties and future challenges associated with them [e.g. 17].

The rising complexity and autonomy of socio-technical systems, however, casts doubt on the appropriateness of the imperative of human responsibility and confronts existing concepts with the problem of designing conflict-free interaction between two autonomously deciding system elements—the human and the machine [18, 19]. The human-centered design approach therefore requires a more thoroughgoing development or indeed overhaul [20], which in turn may only be possible by way of a broad-based societal discussion on fundamental questions with regard to the desired role of automation in everyday life [21]. Use contexts and frequencies as well as the skills and expertise of users, however, vary substantially across the different domains, so it may be necessary to devise specific design concepts for the automotive sector that adequately reflect the heterogeneity of car drivers.

2.2 Automation in the Car

In the automotive sector as well, the transition of the human role from active operator to passive supervisor of the system is advancing apace. Media reporting on the subject of autonomous driving conveys the impression that driverless vehicles will improve road safety in the near future [e.g. 22]. Yet although individual functions in vehicles are already performed by automated systems such as adaptive cruise control, in the foreseeable future the technology will not be able to dispense with the availability of the human driver, who will continue to assume control functions and make strategic decisions [23].

Still open is the question of how best to define the role of the human along the path to completely autonomous vehicles in a way that is both psychologically apt and commensurate with user requirements. While the insights and experiences from the aviation sector described above provide an interesting starting point for addressing this question, their usefulness for design concepts in the automotive field is limited due to the greater complexity and dynamism of the environment in road traffic. A growing number of studies in recent years has focused on the interplay between partially and highly automated driving functions and human behavior [see also 24, 25]. Here too, these deliberations focus on the familiar problematic issues with regard to automation across a range of different automation levels: trust, skill atrophy and situation awareness.

Automation is only useful if the operators trust the technical system and thus also use it. The central challenge in designing automated systems is to generate sufficient trust in the systems. At the same time, errors in the automation can lead to an erosion of trust [26]. Excessive trust, meanwhile, can lead to insufficient monitoring and control of the automation (“overtrust” or “complacency” [27]). The majority of studies on the subject to date have focused on the reciprocal effects of trust in the use of Adaptive Cruise Control (ACC). A certain degree of trust can even be an important prerequisite for the willingness to use driver assistance systems [28]. In a longitudinal study in a driving simulator, Kazi et al. [29] investigated the effect of the reliability of ACC on the perceived trust in these systems. The results show an increase in trust over time for reliable systems, but not commensurate with the objective reliability of the automation. Koustanai et al. [30] reach similar results in their study, which looked at changes in behavior and trust through the systematic gradation of experience levels in the use of collision warning systems. The participant group with the highest level of experience had no accidents in the simulator and reacted more appropriately in critical situations than drivers with less experience. The level of system experience was also positively correlated with the expressed trust in the system, albeit without influencing the acceptance of the automation. In contrast to these findings are the results of several studies that found no significant change in trust levels in ACC through repeated use [e.g. 31, 32]. The causes of these inconsistent results could include moderating factors that have been examined in recent studies. Flemisch et al. [33] and Beggiato et al. [34] emphasize the significance of analogous (previously established) mental models regarding the functionality of the respective automation. Verberne et al. [35] and Waytz et al. [36] take things a step further. On the basis of experimental studies, they show that shared intentions and needs between the human and the machine, and anthropomorphic characteristics of the automation, can be further important factors in establishing trust in automated systems.

Guiding a vehicle demands a wide range of capabilities and skills from the driver, both on the perceptual-motor level (e.g. steering, shifting gears, etc.) and the cognitive level (e.g. making decisions, focusing attention selectively, etc.). Automated execution of these tasks can lead to the loss of the respective skills and at the same time increase dependence on the technical system [37]. The fundamental significance of the subject was underscored by a recent safety alert issued by the United States’ Federal Aviation Administration [38]. The alert calls on pilots to choose the manual flight mode instead of autopilot more frequently, as the loss of skills due to insufficient practice represents an increasing safety risk for aviation. Although the author is not aware of any studies on the problems of skill loss in (partially) automated vehicles, it may be presumed that these effects also occur in the field of vehicle automation. Adaptive or cooperative automation concepts offer the opportunity to counteract such problems and help maintain critical driving skills until completely autonomous vehicles become a reality.

The ability to correctly perceive and interpret complex and dynamic driving situations is predicated on a series of cognitive processes (e.g. attentiveness, memory, mental models) [12]. Monotonous monitoring tasks or distraction by other activities (e.g. using a telephone) can result in these processes not being adequately available for situation awareness in the vehicle. These effects can occur even in the use of systems with a low degree of automation such as Adaptive Cruise Control (ACC). Buld et al. [39] were able to demonstrate that drivers using ACC neglected certain aspects of the driving activity and environmental conditions and consequently incorrectly interpreted system limits. Increased lane drift and late reactions to critical events were interpreted in a study by Ward [40] as indicators of reduced situation awareness while driving with ACC. The analyses of Ma and Kaber [41], however, suggest that situation awareness can also be improved through the use of ACC. A more differentiated picture of these contradictory results is provided by recent studies on the consequences of highly automated driving. In a simulation study, Merat et al. [42] examined the effects of performing a secondary task on driving behavior during automated driving. The study showed that reactions to critical incidents in highly automated and manual driving conditions without a secondary task were comparable. Distraction by a secondary task, however, resulted in significantly higher-speed driving following manual takeover from the automated system. The authors attributed the finding to the reduced situation awareness due to the distraction posed by the secondary task.

The problematic issues raised here represent just a sampling of the challenges that need to be resolved with regard to the interplay between humans and automated vehicles. Many questions with respect to the mental adjustments and changes will only be answerable following the concrete implementation and scientific study of the next-higher levels of vehicle automation (see automation levels BASt, [43]). The design of interfaces, appropriate feedback and avoiding diffusion of responsibility are topics that are being addressed today in new design concepts and implemented in the prototype stage for highly automated vehicles [e.g. 44]. Which learning experiences, reciprocal effects and changes to mental models will ultimately emerge from the use of these systems, however, can only be determined through representative, longitudinal studies.

2.3 What Are Mental Models?

Mental models are cognitive-emotional representations of objects, object relationships and processes—in short, internal representations of the external world. The concept of mental models was first used by the psychologist Craik [45], who postulated that people develop simplified models of the functioning and processes of their environment in their minds. The models are used for orientation, understanding, reasoning and the prediction of events. Craik’s approach to mental models was later further developed by Johnson-Laird [46] to describe and study deductive reasoning and language comprehension.

In the cognitive psychology literature, there is widespread consensus [see also 47] that mental models are dynamic in nature and can be described in terms of three central characteristics. First, mental models are created in working memory and enable individuals to simulate possible actions and their consequences [1]. Thinking is thus the manipulation of mental models. Second, mental models can represent causes and causal relationships; they generate a causal understanding of how systems function [48]. Third, mental models can change over time due to experience—i.e. they are capable of learning. The quality of the models and the conclusions based on them continue to develop through specific learning experiences [49]. With increasing expertise, the understanding of technical matters moves from concrete to abstract representations—a relevant factor for human-machine interaction.

Applied fields of study such as technology design in some cases follow different interpretations of the definition of mental models [see also 1], which can be explained by the different activity contexts. Yet even earlier work underscored the significance of the concept for predicting and understanding human behavior in interactions with technical systems [e.g. 50]. Mental models are thus based on context-specific expectations and prior experience as well as the current perception of system characteristics. They form the foundation of the user’s understanding of the system and decision-making. This means that both error-free use of and trust in technical systems are largely determined by the degree to which the functioning of the machine is compatible with the user’s expectations [33].

Compatibility in the context of mental models is defined in terms not only of operability, but also the user’s experience and general acceptance of technology. Zhang and Xu [51] postulate in this regard a modification or restructuring of existing mental models with the introduction and use of new technologies. A lack of compatibility can lead to frustration and negatively impacts acceptance and diffusion rates [52]. However, if new systems correspond with expectations (i.e. the existing mental models), this results in heightened system trust and a positive user experience [53].

Mental models thus comprise representations of the knowledge, attitudes, values and emotions with which humans interact with their environment. With respect to the automation of vehicles, both the cognitive-psychological processes of information processing and the influence of higher mental structures (e.g. needs, expectations, wishes, etc.) are important. The interdependency of these different levels has been emphasized in theoretical models on the role of the driver in automated vehicles [e.g. 54, 55]. Ultimately the appropriate modification and adaptation of mental models will play a major role in determining the nature and frequency of use, as well as the acceptance of these systems. The successful transition—as yet undefined—of the driver’s role in automated vehicles therefore requires an integrative examination of the scholarship on human behavior in partially and highly automated systems as well as the emergent ideas and requirements with regard to Full Automation Using Driver for Extended Availability. Put another way, human-centered technology design implies not only a consideration of the technical possibilities and limits, but also a focus on individual and societal values and objectives.

3 Mental Models of Autonomous Driving

Many people regard autonomous vehicles as a concept for the distant future. Though many people may have imagined how appealing it would be to be able to sleep or read a newspaper during a drive, knowledge about autonomous vehicles remains sparse among the general population. Decisions regarding the use and acceptance of innovations, however, are not based solely on rational knowledge [56]. Contrary to the notion of humans as rational, benefit-maximizing decision-makers—homo economicus—humans tend to employ simpler decision-making strategies which reduce the amount of information to be processed and are influenced by emotional processes [57–59]. Attitudes and decisions are not infinitely amenable to change merely through the provision of more information. Rather, new information is received and processed selectively so as to be in agreement with existing desires, expectations and goals—the human’s mental models [60]. It is therefore crucial to the success of an innovation that the cognitive perceptions and evaluations of it can not only be integrated into existing mental models, but also appeal to the emotional side of the equation [61, 62].

In addition to numerous studies on the technical, legal and cognitive aspects of the automation of vehicles, to date there have been few studies that have examined the preferences and expectations of potential users. In the largest representative international survey on the subject to date [63], the focus was primarily on the acceptance of and willingness to use automated vehicles. The results for Germany show that automated vehicles are considered by a majority to be a beneficial technological advance. At the same time, half of the respondents express fear regarding automated driving and doubt that the technology will function reliably. In a comparison of multiple use scenarios, long highway trips are most commonly mentioned as the preferred potential use of autonomous driving. Interestingly, the authors find a positive correlation between the acceptance of driver assistance systems and the acceptance of automated driving. One potential explanation could be that the formation of suitable mental models for the characteristics of partially automated systems also has a positive impact on the acceptance of higher automation levels [see also 34].

Which attitudes and cognitive and emotional representations underpin the acceptance or rejection of automated vehicles is still unknown. In addition to the aforementioned cognitive-psychological requirements for the design of the human-machine interaction, however, these factors represent an important prerequisite for the success of the transformation in the transportation sector. The aim of the quasi-representative online survey study introduced here was to generate a differentiated, to some extent explorative, picture of the perceptions of autonomous driving across the use cases generated in the project. The questionnaire was developed with the following overarching questions in mind: “With which mental models do potential users encounter the new role of the driver in autonomous vehicles?”; “Which automated elements of vehicle guidance are most amenable to the mental models of the users?”; “Which control functions and intervention options by the driver do potential users expect in autonomous vehicles and how can acceptance of this line of innovation be increased?”; “Which experience and design elements in autonomous vehicles can replace previous representations on the role of the driver and thus increase acceptance of this line of innovation?”.

3.1 Methods

3.1.1 Questionnaire

The questionnaire was devised in collaboration with other authors of this book (Ms. Cyganski, topic: demand modeling; Ms. Fraedrich and Ms. Lenz, topic: acceptance). The survey was conducted online in April 2014 via an electronic questionnaire. The questionnaire was divided into two main sections: (1) General part: This part consisted of five question groups: socio-demographic questions; questions on prior knowledge, interest and general acceptance of automated driving; questions on need-related attitudes regarding various forms of transportation; questions on the emotional representations of mobility-related concepts; and questions on the topic of time-use and general transportation usage. (2) Special part: The questions in this part related to the four use cases developed in the project and were divided into the following topic groups: free associations on the use case; willingness to use the technology; anticipated use scenario; anticipated impact on prior transportation usage; assumed fulfillment of needs; emotional reactions; trust and acceptance; need for control and intervention; and preferred secondary tasks during automated driving. To reduce the processing time, the questions regarding the four different use cases (see below) in the second part were not answered by all participants. After answering the questions in the first part, the sample was split and the study participants randomly assigned in equal numbers (N = 250 in each case) to one of the four use cases. The questionnaire comprised 438 items, with each participant answering 210 questions following distribution of the use cases. The survey questions were partly taken from earlier mobility surveys [62, 64] and partly newly developed; the questions—in particular those in part two—were checked for comprehensibility in a pretest.

For all attitude questions, a six-point scale was used (1 = completely disagree, 6 = completely agree; with some questions, the codes differed due to the content) to assess agreement with the statement. The affective significance of the terms in the field of mobility was surveyed using the semantic differential method [65]. For the three dimensions of valence, potency and arousal, bipolar, nine-point scales (from −4 = extremely through 0 = neutral to +4 = extremely) were used in which the extremes were designated by the adjective pairs unpleasant–pleasant (valence), weak–powerful (potency) and calming–exciting (arousal). Current traffic behavior was recorded via selection options and frequency categories.

3.1.2 Sample

Participants were recruited through a commercial market research panel of the company Respondi AG (http://www.respondi.com/de/) and paid by the same for their participation. The company assembled a participant group that was representative of the overall German population with respect to age, gender, education and income. A total of N = 1,363 people completed the survey in its entirety. Some people, however, answered the questions in such a short time that it is doubtful that the questions were answered conscientiously. As a consequence, all participants whose processing time was less than 1,000 s were excluded from further analysis. The sample was thereby reduced by N = 230 to N = 1,133. In a further step, the distortion of the original gender ratio that resulted from the exclusion was corrected by removing N = 133 randomly selected female participants, so as to achieve a roughly representative distribution at least with respect to gender. The average processing time of the remaining sample (N = 1,000) was 1,897 s (= 31.6 min) (SD = 780 s). The precise demographic composition of the sample can be taken from Table 6.1.
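To make the exclusion and rebalancing steps concrete, the following is a minimal pandas sketch; the file name, column names and random seed are illustrative assumptions and are not taken from the actual survey data set.

```python
import pandas as pd

# Hypothetical raw export of completed questionnaires: one row per respondent,
# with processing time in seconds and gender (column names are assumptions).
raw = pd.read_csv("survey_responses.csv")

# Step 1: exclude respondents who finished implausibly fast (< 1,000 s).
sample = raw[raw["processing_time_s"] >= 1000].copy()

# Step 2: correct the gender distortion introduced by the exclusion by
# removing a fixed number of randomly selected female respondents.
females = sample[sample["gender"] == "female"]
sample = sample.drop(females.sample(n=133, random_state=42).index)

# Descriptive check of the remaining sample size and processing times.
print(len(sample), sample["processing_time_s"].mean(), sample["processing_time_s"].std())
```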

Table 6.1 Demographic and mobility-specific characteristics of the sample

3.1.3 Data Analysis: Affective Similarity

The affective similarity between the terms evaluated via the semantic differential method was calculated as the three-dimensional Euclidean distance d between the average EPA profile (E = valence, P = potency, A = arousal) of the term “ideal drive” and the average EPA profiles of the other terms:

$$ d = \sqrt{(I_{e} - B_{e})^{2} + (I_{p} - B_{p})^{2} + (I_{a} - B_{a})^{2}} $$

where \( I \) refers to the evaluation of the “ideal drive,” \( B \) to the respective evaluation of the other terms, and the subscript letters denote the EPA dimensions.
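As a quick illustration of this distance measure, here is a minimal Python sketch; the EPA values are invented placeholders, not the survey means reported in Table 6.2.

```python
import math

# Hypothetical mean EPA profiles (valence e, potency p, arousal a) on the
# nine-point scales from -4 to +4; the numbers are illustrative only.
ideal_drive = {"e": 2.0, "p": 1.5, "a": 0.5}
chauffeur = {"e": 1.7, "p": 1.2, "a": 0.8}

def affective_distance(i: dict, b: dict) -> float:
    """Three-dimensional Euclidean distance between two mean EPA profiles."""
    return math.sqrt(sum((i[dim] - b[dim]) ** 2 for dim in ("e", "p", "a")))

# A small distance d indicates high affective similarity to the "ideal drive".
print(round(affective_distance(ideal_drive, chauffeur), 2))
```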

3.2 Results

The first line of inquiry was to what degree the topic of autonomous driving is even known among the general public, whether there is broad interest and how people spontaneously feel about the technology. Less than half of respondents (44 %) claimed to have no knowledge of the subject, while the majority had already heard of it (33 %), read about it (16 %) or claimed to have a higher level of expertise (4 %). A similar distribution was found in regard to interest in the subject of autonomous driving. A majority of participants (58 %) described themselves as “somewhat,” “quite” or “very” interested in the subject. However, a majority (56 %) also cannot imagine replacing their current preferred means of transportation with an autonomous vehicle. Thus in spite of a relatively high degree of interest and some prior knowledge, a majority of the public manifests a certain reluctance towards the use of autonomous vehicles.

3.2.1 Driver Assistance Systems and Giving up Driving Responsibilities

As discussed above, the use and acceptance of driver assistance systems can have a positive effect on the general perception of autonomous driving. The results of the present study show that most respondents (67 %) have already heard of driver assistance systems. Among people who use a passenger car on a daily basis (82 %), cruise control (50 %), acoustic parking assistants (46 %) and high-beam assistants are the most frequently used systems. Other systems such as adaptive cruise control (ACC, 15 %), night vision assistant (11 %), head-up display (10 %) or attention assistant (8 %) are only used by a minority in everyday situations.

The expressed desire to give up certain driving tasks and functions to an automated system yields similar results. Figure 6.1 shows the task-specific distribution of desires in the category spectrum from “absolutely not” to “very willingly.” In a comparison of the different driving tasks it becomes clear that aside from the overwhelming rejection (62 % in the categories “absolutely not” and “preferably not”) of the idea of completely ceding vehicle control to a driving robot, people are particularly averse to giving up the task of steering the vehicle (58.3 % in the categories “absolutely not” and “preferably not”) to an automated system. At the same time, respondents view transferring parking tasks (45 % in the categories “willingly” and “very willingly”) as well as safety-related assistance in the area of vehicle stabilization (43 % in the categories “willingly” and “very willingly”) and pedestrian recognition (43 % in the categories “willingly” and “very willingly”) more favorably.

Fig. 6.1 Desire to transfer functions to an automated system

3.2.2 Representations of the Driver’s Role and Use Cases

Employing the semantic differential method, the study surveyed the affective significance of various terms related to different roles in the vehicle and the scenarios described in the use cases among all participants. The concept of the “ideal drive” and the conventional “car” were also evaluated in this fashion. The raw results (average evaluations on the scales valence, potency and activation) are displayed in Table 6.2.

Table 6.2 Arithmetic mean (M) of the affective evaluations

The results were used to calculate the Euclidean distances and thus the affective similarity between the term “ideal drive” and the other terms (for methodological details see [61, 66]). A visualization of these calculations is provided by Fig. 6.2, in which the Euclidean distance d of the evaluated terms is represented on the x-axis. Low values indicate a smaller distance and thus higher affective similarity between the terms, i.e. they elicit a stronger positive association for the respondents. It is clearly evident that “chauffeur” comes closest to “ideal drive” from an affective standpoint, while “co-pilot” least corresponds to this emotional representation. In a comparison of the various use cases for autonomous driving, it emerges quite clearly that the Vehicle-on-Demand concept deviates most strongly from the idea of an ideal drive, while vehicles with Autonomous Valet Parking are most closely associated with it. The significantly more positive affective positioning enjoyed by conventional cars in comparison to the use cases could therefore represent a major impediment to acceptance, particularly with the introduction of Full Automation Using Driver for Extended Availability. As concerns the role of the driver, the affective representations revealed in the study underscore the role preference explicitly addressed in another question. In this item, participants used a slider to indicate which role they would like to assume in an autonomous vehicle (1 = passenger and 10 = supervisor). The arithmetic mean of 6.36 (SD = 2.9) indicates a preference for the role of an active supervisor who is able to maintain control over the vehicle at all times based on continuously available system information. On the affective level, the role of the passive passenger (d = 2.1) is still visibly remote from the desired ideal (d = 0).

Fig. 6.2 Euclidean distances to the affective representation of the “ideal drive”

3.2.3 Cognitive and Emotional Representations of the Use Cases

As described above, the overall sample in this part of the questionnaire was randomly divided into four subgroups of equal size (each N = 250) and assigned to one of the four use cases (Interstate Pilot Using Driver for Extended Availability (1), Autonomous Valet Parking (2), Full Automation Using Driver for Extended Availability (3) and Vehicle on Demand (4)). This enabled an inter-group comparison of the expectations and attitudes toward the individual scenarios. At the beginning of this section, participants were asked about their willingness to use the briefly described variants of autonomous driving. Autonomous vehicles with valet parking were the most popular (53 %), followed by Full Automation Using Driver for Extended Availability (45 %) and Interstate Pilot Using Driver for Extended Availability (42 %). The lowest intent to use was registered by the Vehicle-on-Demand concept (35 %). According to the conducted analysis of variance (ANOVA), the differences are statistically significant (F(3, 996) = 4.528; p < 0.01). The Bonferroni post hoc test (pairwise comparison of means) indicates, however, that only the Autonomous-Valet-Parking and Vehicle-on-Demand use cases differ significantly in terms of intent to use (p < 0.01).
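The reported test structure (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched as follows in Python; the group data are random placeholders, not the survey ratings, and the group labels are shortened for readability.

```python
import itertools
import numpy as np
from scipy import stats

# Placeholder willingness-to-use scores for the four use cases (N = 250 each);
# the random values only illustrate the test structure, not the actual results.
rng = np.random.default_rng(0)
groups = {
    "Interstate Pilot": rng.normal(3.3, 1.3, 250),
    "Autonomous Valet Parking": rng.normal(3.6, 1.3, 250),
    "Full Automation": rng.normal(3.4, 1.3, 250),
    "Vehicle on Demand": rng.normal(3.1, 1.3, 250),
}

# One-way ANOVA across the four groups (degrees of freedom 3 and 996).
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4f}")

# Bonferroni correction: multiply each pairwise t-test p-value by the
# number of comparisons (here 6) and cap the result at 1.
pairs = list(itertools.combinations(groups, 2))
for a, b in pairs:
    _, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs. {b}: p_bonferroni = {min(p * len(pairs), 1.0):.4f}")
```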

In response to the question to what extent various mobility needs would be fulfilled through the use of an autonomous vehicle, somewhat differing assessments emerge in a comparison of the four scenarios. Table 6.3 shows the averages of these evaluations and the statistical results (ANOVA and Bonferroni post hoc test). From an overall perspective, it can be seen that autonomous vehicles are perceived as convenient, stress-free and environmentally friendly. Statistically relevant differences between the use cases arise with regard to (lack of) stress, convenience, safety and time-savings. According to respondents’ assessments, Autonomous Valet Parking most effectively addresses the needs for time-savings, convenience and freedom from stress, which explains the high acceptance of this variant of autonomous driving. From a critical standpoint, the safety concerns related to the Vehicle-on-Demand use case stand out.

Table 6.3 Arithmetic mean (M) standard deviation (SD) from need fulfillment

The emotional evaluation of the use cases was conducted with regard to 10 different emotions (hopefulness, relaxation, satisfaction, happiness, concern, anger, stress, powerlessness, dislike, fear). The participants were asked to indicate which emotions they would experience in the anticipated use of the respective variant of autonomous driving. The results (see Table 6.4) confirm the tendencies that emerged in the comparison of the use cases described above. The strongest positive associations were found in connection with Autonomous Valet Parking. The feelings of satisfaction, relaxation and happiness are also significantly more strongly represented here than in the other scenarios. In the use cases Interstate Pilot Using Driver for Extended Availability, Full Automation Using Driver for Extended Availability and Vehicle on Demand, the emotions of powerlessness and fear dominate. The feeling of being at the mercy of forces beyond one’s control is associated with these emotions and represents a major hurdle to acceptance. Aside from Autonomous Valet Parking, only Full Automation Using Driver for Extended Availability evokes above-average positive emotions such as happiness, hopefulness and satisfaction, although the negative emotions do predominate in this use case.

Table 6.4 Arithmetic mean (M) and standard deviation (SD) of emotional responses

These results provide a differentiated picture of the emotional building blocks of the most important emotion in the field of automation—trust. Trust in the described variants of autonomous driving was measured in this survey based on four items (e.g. “I can imagine relying on such a system in my everyday mobility”)—analogous to the other attitude items on a 6-point Likert scale. A total index was computed from these items. As expected, trust is highest in vehicles with Autonomous Valet Parking (M = 3.45; SD = 1.31) and lowest for the Vehicle-on-Demand concept (M = 3.10; SD = 1.42). Trust in vehicles with an Interstate Pilot Using Driver for Extended Availability and in fully automated vehicles is roughly the same (M = 3.36; SD = 1.33 vs. M = 3.28; SD = 1.33). Only the difference between the Autonomous-Valet-Parking and Vehicle-on-Demand scenarios (Bonferroni post hoc test, p < 0.05) is statistically significant.
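A minimal sketch of how such a trust index can be formed from the four Likert items; the item labels and responses are invented, and averaging the items (so that the index stays on the original 1–6 metric, consistent with the reported means) is an assumption about the scoring.

```python
import pandas as pd

# Hypothetical responses of three participants to the four trust items
# (1 = completely disagree ... 6 = completely agree); labels are illustrative.
items = pd.DataFrame({
    "trust_rely_everyday": [4, 2, 5],
    "trust_item_2": [3, 2, 6],
    "trust_item_3": [4, 3, 5],
    "trust_item_4": [3, 2, 5],
})

# Index per participant: mean of the four items, keeping the 1-6 metric.
trust_index = items.mean(axis=1)
print(trust_index.mean(), trust_index.std())  # group-level M and SD
```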

3.2.4 Intervention, Control and Experience Needs

For a clear majority of those surveyed (Interstate Pilot Using Driver for Extended Availability: 82 %; Autonomous Valet Parking: 81 %; Full Automation Using Driver for Extended Availability: 88 %; Vehicle on Demand: 84 %), the possibility of reassuming control of the vehicle or terminating the automated driving procedure at any time is one of the central needs. At the same time, in the scenarios with an available driver (Interstate Pilot Using Driver for Extended Availability: 32 %; Full Automation Using Driver for Extended Availability: 48 %), only a minority would wish to cease paying attention to traffic and completely cede control of the vehicle to the automated system. This is also reflected in the need expressed by majorities for both of these use cases of not wishing to change the conventional seating position during automated driving (Interstate Pilot Using Driver for Extended Availability: 76 %; Full Automation Using Driver for Extended Availability: 79 %). In all four scenarios, the majority of participants expressed the desire to be able to adjust the automated system to reflect personal preferences in terms of driving style (e.g. comfortable vs. sporty) and route selection (e.g. fastest vs. most environmentally friendly; Interstate Pilot Using Driver for Extended Availability: 71 %; Autonomous Valet Parking: 76 %; Full Automation Using Driver for Extended Availability: 72 %; Vehicle on Demand: 82 %).

The most important perceived benefit of using autonomous vehicles is the possibility of enjoying the landscape during the drive (Interstate Pilot Using Driver for Extended Availability: 64 %; Full Automation Using Driver for Extended Availability: 72 %; Vehicle on Demand: 72 %; Autonomous Valet Parking: NA). The option of being able to converse unhindered with other vehicle occupants is likewise viewed highly positively (Interstate Pilot Using Driver for Extended Availability: 63 %; Full Automation Using Driver for Extended Availability: 65 %; Vehicle on Demand: 68 %; Autonomous Valet Parking: NA). Astonishingly, activities such as surfing the internet (Interstate Pilot Using Driver for Extended Availability: 28 %; Full Automation Using Driver for Extended Availability: 39 %; Vehicle on Demand: 46 %; Autonomous Valet Parking: NA), viewing films (Interstate Pilot Using Driver for Extended Availability: 23 %; Full Automation Using Driver for Extended Availability: 32 %; Vehicle on Demand: 36 %; Autonomous Valet Parking: NA), working (Interstate Pilot Using Driver for Extended Availability: 22 %; Full Automation Using Driver for Extended Availability: 33 %; Vehicle on Demand: 36.4 %; Autonomous Valet Parking: NA) or relaxing or sleeping (Interstate Pilot Using Driver for Extended Availability: 31 %; Full Automation Using Driver for Extended Availability: 47 %; Vehicle on Demand: 54 %; Autonomous Valet Parking: NA) are only regarded as positive aspects of autonomous driving by a minority. The most important benefits of Autonomous Valet Parking are seen to be simplifying the search for parking spaces (80 %), the safety of the parking location (78 %), the resulting free time (76 %) and the cheaper parking options outside of the inner-city areas (76 %).

3.3 Summary and Conclusions

The focus of this chapter has been the interaction between humans and autonomous vehicles. Proceeding on the assumption that automated vehicles will for the foreseeable future depend upon the availability and control of the human, we first looked at the cognitive-psychological effects of the human-machine interaction. This was followed by an empirical study of the user perspective on autonomous driving through an extensive online survey. The study focused in particular on the attitudes, expectations and emotions—the mental models—toward the subject of autonomous driving.

Based on the scholarship thus far on the psychological consequences of automation in different domains (e.g. aviation, production), it may be concluded that as we proceed towards Full Automation Using Driver for Extended Availability, designers and developers would do well to place the human at the core of their endeavors. Even in the partially automated systems available today, drivers display well-known problems such as excessive trust and reduced situation awareness. The long-term effects of higher degrees of automation and the associated lengthier periods of mental decoupling from the driving task on the cognitive and motor skills required by drivers are still largely unknown. The effects found in this regard for highly trained and experienced airplane pilots, however, are alarming [38]. Training and regular manual execution of automatable driving tasks thus seem to be an important instrument for maintaining the required and desired skills of the driver.

As long as the human is a part of the availability concept of automated vehicles—whether as a supervisor of the system or taking over the driving task—both the human and the machine need a suitable representation of the respective other agent. Transparent interfaces adapted to the mental system of the human are the prerequisite for the necessary situation and system awareness in interactions with the automated system. On the other hand, the technical system must also be able to correctly interpret the mental state of the driver, her intentions and behavior and dynamically represent them in a driver model. In adaptive and cooperative design concepts, these aspects have already been implemented in highly automated vehicle prototypes [44, 67]. Moreover, vehicle manufacturers and research institutions are currently working on potential solutions to these problems in a range of different projects (www.adaptive-ip.eu; www.incarin.de; www.urban-online.org).

The survey results highlight some emerging contradictions between what is technically feasible and the innovations actually desired by the public. Although a majority of drivers has become accustomed to handing over certain driving tasks (e.g. cruise control) to assistance systems, most people are highly averse to the idea of actually letting go of the steering wheel. The current cognitive and affective representations of the role of the driver are still very strongly associated with the conventional image of an active chauffeur. The notion of assuming the role of a passive passenger finds little acceptance. The conventional, manually controlled vehicle is still so strongly associated with the ideal image in the public mind that, for the majority, completely autonomous vehicles do not fulfill mobility needs. The open question is whether a step-by-step, evolutionary automation of vehicles can achieve the requisite changes to the mental models associated with role expectations in autonomous vehicles. A situation-specific transfer of driving tasks to the autonomous vehicle may, as the high acceptance of Autonomous Valet Parking illustrates, represent a more fruitful alternative.

Moreover, the results of the survey offer ideas on possible strategies for the transformation that take their orientation from the needs and emotions of potential users. The main argument for the introduction of autonomous vehicles in previous public debates has been increased road traffic safety. This perception is not shared by the general public, however. Rather, the participants in this study see the benefits of autonomous vehicles as stress reduction, convenience and environmental friendliness. At the same time, associated emotions such as powerlessness and fear are powerful factors that pose a major impediment to acceptance. The human thinking apparatus is not capable of objectively estimating the risk of rare events [58], so fears and concerns can lead to irrational decisions. From this perspective, user-centered development means taking account of existing needs both in terms of communication and the concrete design of the systems.

For the potential user, the question is ultimately the added value of an autonomous vehicle compared to the still highly regarded manually controlled vehicle. What should be the focus of one’s attention if one is no longer required, or indeed able, to concern oneself with the control of the vehicle for safety reasons? Contrary to expectations, a majority of participants was not interested in the extended range of infotainment options from internet to television, but instead preferred to enjoy the landscape uninterrupted. Just how stable and valid these assertions prove to be in concrete interactions with automated vehicles will have to be addressed in future studies. But perhaps this need follows in the tradition of German romanticism and will offer a new impetus for the design of an automated, “close to nature” space.