Abstract
Cars that combine automated functions, such as Adaptive Cruise Control (ACC) and Lane Keeping (LK), are becoming increasingly available to consumers, and higher levels of automation are under development. With these systems, the role of the driver is changing. If not sufficiently supported, this new interaction between driver and vehicle may result in several human factors problems, including driver distraction, loss of situational awareness and high workload during mode transitions. A large conceptual gap exists about how to create safe, efficient and fluent interactions between car and driver, both during automation and during mode transitions. This study looks at different HMIs from a new perspective: embodied interaction. The results identify design spaces that are currently underutilized and that may contribute to safe and fluent driver support systems in partially automated cars.
Keywords
- Embodied interaction
- Human computer interaction
- Human Machine Interface
- Review
- Self-driving vehicles
- Automated vehicles
1 Introduction
While cars with basic automated functions, such as Adaptive Cruise Control (ACC) and Lane Keeping (LK), are becoming more widely available to consumers, higher levels of automation such as level 3 and 4 [1] are under development. These highly automated systems take over the longitudinal and lateral control of the car. In case of a level 2 system, drivers still need to monitor the driving situation continuously. With a level 3 system, drivers no longer need to continuously monitor the driving situation, but still have to be able to take back control when requested within a given time frame. A level 4 system includes a minimal risk maneuver in case the driver does not take back control after a request. As the systems have an Operational Design Domain (ODD) and do not function in all possible situations, drivers still need to take back control occasionally. In this interaction, the HMI plays a crucial role to help drivers understand their automated vehicle (Carsten and Martens 2018).
Automated cars can provide multiple benefits for both the driver and society as a whole. These include improved traffic safety, potentially reduced fuel consumption and accompanying cost reductions, reduced CO2 emissions, and improved driver comfort [2,3,4]. When drivers still have to monitor the situation continuously, they are at least relieved of some of the physical effort of driving. When they do not need to monitor the situation, they can engage in non-driving tasks while traveling. Studies such as [5] have already shown that drivers engage in tasks ranging from reading to playing games on a tablet.
However, recent studies show that besides these potential benefits, automated cars may create safety issues in the driver-car interaction [6,7,8]. Expected issues when the driver needs to take over are driver distraction, automation surprise, loss of situational awareness and high workload [6, 9,10,11]. The role of drivers shifts from operator to supervisor. This supervisor role, required with level 2 systems, has been shown to be difficult for humans [12, 13]. Distraction towards non-driving activities, with an accompanying loss of situational awareness, is to be expected. Shifting attention from the distraction back to the driving task can be even more challenging, especially in level 3 or 4 vehicles, where the driver is temporarily not required to monitor the driving situation and may be immersed in a non-driving task. Drivers have to disengage both physically and mentally from the non-driving task before resuming manual control.
Studies have shown lowered situational awareness in drivers who were engaged in non-driving tasks for long periods of time [14]. Using the commonly used definition by [15], the situational awareness of drivers can be described as: perceiving the driving situation, understanding this situation, and projecting the status of this situation into the future. When drivers are requested to take back control, they first need to regain their situational awareness to a level at which they are capable of safely resuming control. To avoid negative effects on safety, acceptance and driver comfort, the car's Human Machine Interface (HMI) should take these human factors into consideration. In case of distraction or immersion in non-driving activities, the HMI should support the driver in smoothly returning to the driving task and regaining situational awareness efficiently. It can also provide support during the automated phase, for example to retain a certain level of situational awareness in the driver. Interestingly, solutions are very often sought in improving system reliability: the more reliable the system, the fewer human factors issues should arise. However, as Carsten and Martens (2018) already indicated, this is not correct. With improving system reliability, comfort and trust will increase, but automation surprise and response times will also increase, and situational awareness, attention and trust calibration will decrease. Therefore, instead of focusing on improving system reliability, we believe that the primary focus should be on a proper interaction between the vehicle and the user, irrespective of the ODD or the system level.
Until now, the development of and research on driver support through in-car HMIs has mainly been addressed from a traditional cognitive psychology and human-centered design perspective. In this traditional perspective, cognition is considered to be “the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses” [16]. Although specific perspectives of course differ from each other, the mode transition from automation to manual control is commonly described as a sequence of consecutive mental processes and physical actions.
This study investigates driver support in partially automated vehicles from a different perspective: embodied interaction [17,18,19]. This may allow us to identify unused design spaces. Embodied interaction proposes that all knowledge and sense-making of the world emerges from a continuous and simultaneous interaction with the world [20]. In this perspective, cognition is not strictly designated as sequential processes in the brain. Rather, cognition is the entire system of interaction between mind, body and world. As [19] stated, “cognition is a highly embodied or situated activity […] thinking beings ought be considered first and foremost as acting beings”. Furthermore, embodied cognition states that abstract symbols (including words) only gain meaning through embodied experiences and physical aspects. In this embodied perspective, the emphasis of gaining knowledge thus lies more on the physical acting of a person in a specific situation. By discarding the idea that cognition only occurs in the mind, new design spaces may be discovered. More emphasis can be put on the combination of mind and body making sense of situations in ongoing interactions with the environment.
This study reviews current HMI feedback systems of partially automated cars during two phases. The first phase is the Take-Over Request (TOR) by the car: messages from the car indicating that the driver needs to take back control from the automation. The second phase is the general HMI during automation. This phase also includes any Hands on Wheel Warnings (HOWW). These warnings indicate that drivers have to put their hands on the wheel (or, ideally, their eyes on the road). In most systems, the automation disengages if the driver does not comply with the HOWW. Since this is not a formal request to take over and is not linked to system limitations, it is described here as feedback during automation.
The HMI systems are reviewed in the light of three important characteristics of embodied interaction: suppleness [21], bodily experience [18] and situatedness [19]. These characteristics include for example: the fluency with which TORs are introduced, in- and output modalities, and whether the feedback systems are adaptive to the situation. Further details of the review protocol are discussed in the methodology section. While most review papers only discuss academic papers and patents, this review includes currently commercially available systems and systems that are being studied in experiments but are not yet on the market.
The goal of this review is to identify the current state of HMI support during the TOR phase and the automation phase, in both literature and commercially available systems. We want to examine how these systems consider the main characteristics of embodied interaction in their design. This will allow us to identify unexplored design spaces and new opportunities for the design of HMI systems of partially automated cars. In sum, this study investigates two main research questions: (1) What embodied design elements are currently used in driver support during TORs and automated driving? (2) What are the unused embodied design spaces for designing HMI support for TORs and automated driving in partially automated cars?
2 Methods
2.1 Data Collection
The materials gathered for this literature review consisted of the following types: journal papers, conference papers, work-in-progress papers, technical reports and product documentation of commercial cars. It was decided to include technical reports and product documentation of commercial cars because the current development of HMI in automated car systems is proceeding fast. Including these material types allowed the study to review the latest developments in both industry and academia. Both the commercial car systems and the concepts in literature were reviewed on (1) the HMI during TORs, and (2) the HMI during automated mode. As the majority of the gathered materials specify neither the level of automation nor the exact Operational Design Domain (ODD), the requirement for inclusion was that the system automates both lateral and longitudinal control simultaneously.
The literature papers and reports had to be written in English and published after 2008. Although other studies have conducted reviews over shorter periods, we believe it is necessary to include sources from a 10-year period. Condensing this work into a short snapshot would undermine the continuous progress within the field. Literature reviews and meta-analysis studies were excluded. The following leading research databases were used to collect the journal and conference papers: Web of Science, IEEE, Scopus, and Google Scholar.
For the TOR reviews, we solely considered systems that indicate a take-over request due to system limitations. Therefore Hands on Wheel Warnings (HOWWs), which prompt the driver to keep their hands on the wheel without the need to disengage automation, were not reviewed among the TORs. However, the HOWWs were included in the review of the general HMI during automation. These warnings are often included in car systems both for legal reasons and with the intention to keep the driver ready to take back control instantly.
Literature Concepts – TOR.
For the TOR review on literature concepts, the literature papers had to be specifically focusing on the design or testing of HMI support during TOR. Studies that only used HMI as a means to perform their experiment on a different topic were excluded. The following keywords were used in the research databases: ((“Autonomous” OR “Self-driving” OR “Automated”) AND (“HMI” OR “Human machine interaction”) AND (“Design” OR “Feedback”) AND (“Take-over” OR “Take over” OR “Transition” OR “Warning” OR “Request”) AND (“Car”)).
Literature Concepts – HMI During Automation.
For the review on general HMI feedback during automated mode, only studies that specifically address the development and testing of an HMI design were included. Studies that use an HMI purely as a means to perform their experiment on a different topic were excluded. The search entry for materials on HMI systems during automated mode contained the following keywords. ((“Autonomous” OR “Self-driving” OR “Automated”) AND (“HMI” OR “Human machine interaction”) AND (“Interface” OR “Feedback”) AND (“Car”)).
Commercial Cars – TOR and HMI During Automation.
The selection of commercial car brands was based on their official user manuals and websites. The car system had to be available for purchase at the time of this review. To avoid an incomplete review, only systems that included all necessary information for the categorization were considered. Of the currently available systems, only two formally include a TOR [22, 23]. Therefore the TOR review included just these two commercial car systems. (As mentioned before, the systems do include HOWWs. These are reviewed in the ‘general HMI during automation’ section.)
2.2 Data Coding
The materials gathered were labelled on three main use qualities of embodied interaction: suppleness [21, 24, 25], bodily experience [18, 20] and situatedness [19]. Although not exhaustive, these qualities are discussed frequently within the embodied interaction domain and are generally accepted to portray (some of) its core elements. Each quality is discussed briefly below, together with its respective measures. Some of the specific variables were taken from the study by [26], which created a categorization framework for control transition interfaces. Tables 1 and 2 show all variables that were examined, for the TORs and the general HMI during automation respectively.
Suppleness. [24, 25] introduced the use quality of suppleness. They stressed designing for supple back-and-forth interaction between a user and a system, which can be seen as a fluent ‘dance’ [25]. The Webster dictionary definition of supple is considered the base for this use quality: “easy and fluent without stiffness or awkwardness”. In this study, we categorized the TORs on three supple qualities. The first was whether the transfer is introduced abruptly or gradually: Temporal Output Mode [26]. The TORs could be categorized as being shown once, several times, or incrementally. It was specified whether the support was given before/during deactivation of the system, or before a hazard. This was important to take into account, as the time to take over is either the time before a collision or the time before deactivation of the system. The second variable was the amount of time the driver has to take back control: Time to take over. More specifically, how much time does the driver have after the TOR until the system disengages or the car crashes? The third item entailed the use of Social cues. The research and design area of embodied interaction is increasingly focusing on incorporating natural social interactions in artificially intelligent systems [27]. As we are social beings, we engage in continuous social interactions to understand and act on the world [28]. We therefore investigated whether the HMI systems make any use of the social cues we use daily in human-to-human communication. These could, for example, be facial expressions and gestures.
Bodily experience. Inclusion of the body in making sense of the situation is at the core of embodied interaction [18, 20]. Our entire body and all our senses are included in learning and in creating an understanding of the world. By including multiple senses in a feedback system, overload may be reduced or prevented. Therefore, the way the driver has to disengage automation (Input) was included in this review, as well as the modality of the TOR itself (Output). For the input, we used a classification similar to [26], which included physical, touchscreen, gesture and speech. However, touchscreen was made a subclass of physical, and we additionally included options for activity recognition and ‘other’ input. Activity recognition includes all forms of system-initiated recognition, such as eye movement recognition or posture recognition. The physical class contains input through buttons, the steering wheel, the pedals and the touchscreen. For the output modalities, we included all five basic modalities: visual, auditory, haptic, smell and taste. As directional forces such as acceleration and deceleration are a large part of the driving experience, the vestibular sense was also included.
Situatedness. As the name suggests, situatedness [19] describes how the meaning of interactions with technology cannot be seen in isolation from the context in which they occur: interaction is always situated. Cognition relies on embodied interactions that take place within a specific situation. For example, a symbol or gesture can have a very different meaning in different contexts and for different people. In this study, TORs were investigated on whether or not they are Adaptive to the driver and driving situation. Is the feedback the same for all drivers and all driver states? And is the feedback the same in all driving situations?
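To make the coding scheme concrete, the variables described above can be sketched as a simple data structure. The following Python sketch is illustrative only: the class, enum and field names are our own and do not appear in the reviewed materials, and the example record paraphrases the Audi TOR as characterized later in this review.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional

class TemporalOutputMode(Enum):
    """How the TOR is introduced over time (categories after [26])."""
    ONCE = auto()          # a single request
    SEVERAL = auto()       # repeated requests
    INCREMENTAL = auto()   # requests that escalate in intensity/modality

@dataclass
class TORCoding:
    """One coding record for a reviewed TOR concept or system."""
    # Suppleness variables
    temporal_output_mode: TemporalOutputMode
    time_to_take_over_s: Optional[float]          # None when unreported
    social_cues: List[str] = field(default_factory=list)
    # Bodily experience variables
    input_modalities: List[str] = field(default_factory=list)   # physical, gesture, speech, ...
    output_modalities: List[str] = field(default_factory=list)  # visual, auditory, haptic, ...
    # Situatedness variables
    adaptive_to_driver: bool = False
    adaptive_to_situation: bool = False

# Hypothetical example record: a system giving several warnings,
# a hands-on-wheel symbol, and multimodal output.
example_tor = TORCoding(
    temporal_output_mode=TemporalOutputMode.SEVERAL,
    time_to_take_over_s=None,
    social_cues=["hands-on-wheel symbol"],
    input_modalities=["physical"],
    output_modalities=["visual", "auditory", "haptic", "vestibular"],
)
```

A record like this makes the three qualities directly comparable across concepts; the general-HMI coding (Table 2) would use an analogous structure without the take-over timing field.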
3 Results
3.1 Reviewed Materials
An overview of the results can be found in Tables 3 to 6 in Appendix A. Seven different commercial car brands were selected for this review. All systems have the option to simultaneously activate the automation of lateral and longitudinal control. As the systems have different names across brands (and sometimes even within a brand), they are addressed by the names assigned on the companies' official websites and/or user manuals. The included brands and systems are: (1) Audi – AI Traffic Jam Pilot [22, 29], (2) Tesla – Autopilot [30], (3) Cadillac – Super Cruise [23], (4) BMW – Steering and Lane Guidance Assistant [31], (5) Volvo – Pilot Assist, (6) Mercedes – Drive Pilot [32, 33]. All commercial systems are reviewed on their general HMI during automation (including HOWW). However, only two of these systems include a formal TOR, since they allow the driver to be temporarily out of the loop through a traffic jam assist. Therefore only these two systems could be reviewed on their TORs [22, 23].
A total of 20 literature papers were reviewed on their TOR concepts in this analysis. Some papers discussed multiple concepts; these were considered as individual concepts, resulting in a total of 31 reviewed concepts. For the general HMI during automation, 15 papers were selected. Again, as some papers presented multiple concepts, a final total of 17 concepts was reviewed. None of the literature concepts contained HOWWs; therefore these could not be included in the general HMI review.
3.2 Take-Over Request (TOR)
The result tables are situated in Appendix A. Table 3 shows the results for TORs in commercial cars. Table 4 shows the results for TORs in literature concepts.
Commercial Cars.
Formally, only two of the assessed systems issue a TOR [22, 23]. Therefore only these two commercial systems are reviewed here. The remaining systems all require the driver to continuously keep their hands on the wheel or their eyes on the road.
Suppleness. The Audi AI Traffic Jam Pilot provides multiple TORs before the system disengages due to system limitations. Cadillac Super Cruise provides one TOR before deactivation; by the second warning, at the end of the take-over period, the system has already deactivated. Both systems provide a social cue in the form of a symbol of hands holding a steering wheel (or an animation of hands grabbing the steering wheel).
Bodily Experience. In both cases, the TORs are visually displayed on a screen. Cadillac Super Cruise additionally uses color and illumination in the steering wheel. In both systems, the visual cues are complemented with auditory beep(s) and/or a spoken take-over message. Cadillac Super Cruise includes seat vibrations as a TOR. Audi includes a short brake jerk and tightens the safety belt three times during the second warning. In both systems, drivers can disengage the automation by turning the steering wheel, pressing one of the pedals or pushing a button.
Situatedness. The reviewed systems are not adapted to the driver: the same message is given regardless of the current driver (state) or the activity they are performing. Neither system is adapted to the driving situation. The feedback does not change according to, for example, the reason that the car needs to transfer back control.
In conclusion, we found that the two reviewed commercial car TORs are very similar on the reviewed embodiment aspects. The Audi system is slightly more supple as it provides multiple TORs before the system disengages. Both TOR systems provide visual and auditory cues, complemented with seat vibrations (Cadillac) or seatbelt tightening and vestibular feedback through braking (Audi). The situatedness of the TOR feedback is lacking, as the TORs adapt neither to the driver nor to the specific driving situation.
Literature Concepts.
Suppleness. The majority of the concepts (N = 20) consist of a single TOR before a detected hazard (without the automation deactivating). Two concepts are similar to the commercial car systems in that they provide one take-over request at which the system is immediately deactivated [34, 35]. Eight of the reviewed concepts give several warnings before deactivation; five of these increase in intensity and cue modalities over time. The time that drivers have to take back control before deactivation or impact ranges widely, from 10 s to ‘a few minutes’. Two studies only report that the drivers had ‘sufficient’ time to take back control [36, 37], without stating how much time this was. As a social cue, a few of the concepts (N = 5) include a symbol with hands on the steering wheel, as also seen in the commercial car systems (Fig. 1). One concept uses a distressed voice in a verbal message to portray urgency [38].
Bodily Experience. 23 of the TOR concepts give auditory feedback. This feedback is divided into abstract beeps (N = 16) and verbal messages (N = 2), while the remaining concepts combine both (N = 5). The majority of the concepts (N = 17) use a display. These include standardized symbols, text and the use of color or flashers. The color red is used in all cases to indicate an immediately required take-over. Of the display messages, thirteen are complemented with auditory or haptic feedback. Four of the concepts include lighting. While two concepts have a simple LED on the dashboard, the concept by [39] has an LED strip on the steering wheel that can light up in directional patterns, hinting towards the required steering direction after take-over. Two studies included mechanical transformations in their concepts. In the concept by [40], part of the steering wheel was replaced with grips that change direction during the TOR depending on the required steering direction. In the concept by [41], the upper part of the steering wheel moves backwards during automation and shifts back during the TOR, mainly to emphasize the need to take back control. Eight of the concepts include vibration feedback, mainly applied in either the driver seat or the steering wheel. However, the concept in [38] gives vibration feedback through a wristband. The vibration feedback in the driver seat is either static or dynamic; in the dynamic case, the vibration shifts along rows of the seat, creating the ‘illusion’ of motion or direction. In all but three concepts, drivers can take back control by engaging with the steering wheel or pedals.
Situatedness. One of the literature TORs is adaptive to the driver: the concept by [35] shows the TOR on the driver’s mobile device if they are using it. More than half of the concepts (N = 16) adapt to the driving situation. Most of these contain a suggested (steering) action based on the situation. The way in which this is done ranges widely: some provide a suggested steering direction through vibration or lighting direction, while others adapt the color or symbol accordingly. [40] even adapts the shape of the steering wheel according to the suggested steering direction. Some concepts do not suggest a direct action but rather provide boundaries within which the driver can operate. For example, the concept by [37] shows the intent and expected actions of other road users, while the concept by [43] shows an overlay on the driving lane indicating whether it is safe to continue driving there. Two concepts visually show the upcoming situation and why the driver needs to take over, for example dense fog or roadworks.
Concluding, the majority of the literature concepts present one or multiple TORs before deactivation of the automation. This is expected, as warnings before deactivation are easier to implement as a pre-set in an experimental setting than in a car driving on the road. The variety of social cues is scarce. More variety is found in the bodily experience, but only in the output: physical shape changes, verbal messages, dynamic vibrations and lighting. However, the main outputs are still displays and auditory beeps. Only one of the concepts adapts to the driver. More concepts adapt to the specific driving situation; these mainly provide a suggested action, boundaries after the transfer of control, or reasons for the TOR.
3.3 HMI During Automation
The result tables are situated in Appendix A. Table 5 shows the results for general HMI during automation in commercial cars. Table 6 shows the results for general HMI during automation in literature concepts.
Commercial Cars.
Suppleness. Most of the commercial systems (N = 5) include a Hands on Wheel Warning. While it is not indicated exactly how long these warnings continue before the system disengages or stops the car, all systems provide the warnings several times with increasing intensity (in some form). All systems use a ‘hands on wheel’ symbol as a social cue to indicate that the driver needs to keep their hands on the wheel.
Bodily experience. All systems use a visual display on a screen with illustrations, symbols, text and changing colors to provide feedback. If drivers keep their hands off the wheel or their eyes off the road for too long, they receive auditory beeps as a warning and vibrations in the steering wheel or seat. Cadillac Super Cruise includes illumination and changing colors in the steering wheel as additional feedback on the automation state. Drivers get visual feedback on the current car actions as they see the steering wheel turn; besides this visual feedback, drivers can feel the car’s actions through the turning of the wheel.
Situatedness. All HMI systems during automation are partially adapted to the driver, as they sense whether the driver has their hands on the wheel, or their eyes on the road, and prompt a HOWW accordingly. There is some variation in the extent to which the systems are adapted to the driving situation. However, all of them show a combination of automation mode, detected vehicles, lane markings and speed limit.
Conclusion. The general HMI during automation of commercial cars is very similar across the systems on the investigated aspects. The suppleness with regard to social cues is limited to ‘hands on wheel’ symbols. The output is mainly given through displays, auditory beeps and vibrations in the steering wheel. The feedback is partially adaptive to the driver, as it issues a ‘hands on wheel’ (or eyes on road) message when the car detects that the hands are not on the wheel (or the eyes are not on the road). The feedback is adaptive to the driving scenario, as all systems present the detected vehicles, obstacles, speed limit and/or lane markings. Figure 2 shows examples of HMI during automation in the Audi (A8) and Tesla systems.
Literature Concepts.
Suppleness. Two concepts [36, 46] use the social cue of showing ‘hands on a steering wheel’. While the concept by [36] uses this to indicate manual driving mode, [46] uses it as a soft warning in case of potential hazards. Two concepts [47, 48] use facial expressions in emoticons as social cues to indicate the confidence of the automated system. [49] uses the tendency to engage in joint attention/gaze to redirect the driver’s attention; their concept contains three physical mini robots on the dashboard that turn their heads away from and towards the road ahead. The concept by [50] uses small talk to engage with the driver, consisting of sentences that are either driving related or not.
Bodily experience. Six of the concepts provide multimodal feedback, in combinations of auditory, visual and/or haptic stimuli. Eleven of the concepts include visual feedback, most of which is given on displays (N = 9). Two concepts use lighting in their feedback [46, 49]. In [49] this is used to intensify the movement of the physical dashboard robots (described above). [46] uses light in the windscreen as a soft warning to direct the driver’s attention towards potential hazards in the driving environment. [49] is the only concept to use movement of mechanical objects in its HMI. Two concepts use tactile stimuli. [51] uses vibrations in different parts of the driver seat to indicate approaching vehicles. The concept by [52] consists of a high-resolution haptic surface the driver can touch with their fingers. The authors report that the concept may be used for visually impaired passengers of automated cars, but an exact function of the device is not specified. The use of auditory feedback is split evenly between beeps and verbal statements. The study by [53] uses auditory icon sounds, described as “non-speech sounds that bear some ecological relationship to their referent processes”. An example is a water gurgle sound to represent the message that fuel is running low.
Situatedness. Four of the reviewed concepts are adaptive to the driver in some form. The concept by [49] tries to engage the driver in looking at the road through personification (a small robot looking at the driver and then looking at the road) when the driver is inattentive. Similarly, the concept by [51] only starts the vibration feedback, which provides information about the surrounding traffic, if the driver is not looking at the road. The concept by [54] shows adaptive information on the driver’s condition during automation; what this information exactly entails is not specified. While the study by [52] is directed specifically at visually impaired drivers, its feedback is not dynamically adapted to the driver during automation. Almost all concepts are adapted to some degree to the driving situation (N = 13). They use a variety of combined methods to show adaptive feedback about the driving situation. Five of the concepts show the currently detected elements of the driving situation on a display, such as road users, lane markings and traffic signs. All five of these concepts also include the planned next action of the car, such as an upcoming turn or brake. [22] and [32] change the location of their feedback, respectively vibration and illumination, according to the detected hazards. While [50] uses casual remarks and questions about the driving situation to engage the driver, [55] adapts the verbal level of information to the situation. For example, in some situations the system only mentions the current action, “the car is braking”, while in other situations it gives the reason for this action, “the car is braking because a traffic jam is coming up ahead”. [43] uses a direct overlay on the windscreen to show whether it is safe to continue in that lane after deactivation of the automation. Three of the concepts display the confidence of the system to continue in automated mode [47, 48, 56].
Conclusion. The general HMI during automation in literature concepts shows a variety of supple social cues, mainly facial expressions, shared gaze, a ‘hands on wheel’ symbol and small talk. The bodily experience of the literature HMI concepts shows some variation. Only a small part of the literature concepts is adaptive to the driver; those that are mainly show the driver's condition or provide feedback when the driver is not paying attention. In most concepts the feedback is adaptive to the driving situation, showing the confidence of the automated system, the detected environment and detected hazards. Some concepts change the location of their feedback according to the environment and the next actions of the car.
4 Discussion
The goal of this study was firstly to identify the current state of embodied design elements in driver support in partially automated cars. This way, new design spaces may be discovered to guide the design of innovative driver support in automated vehicles. To achieve this, partially automated car systems from literature and industry were reviewed from an embodied perspective. More specifically, we reviewed TOR feedback and the general HMI during automation on suppleness, bodily experience and situatedness.
Several opportunities for new designs were found in the current TOR feedback systems. Firstly, most commercial car systems do not provide a formal TOR, since the driver is expected to monitor the road continuously. Rather, the system disengages when it can no longer function, with only a simple visual or auditory cue. While we recognize that this is most likely a technical limitation, implementing multiple incremental TORs before system disengagement may greatly improve the suppleness [57]. Especially as it can be very difficult for drivers to recognize on their own when the system reaches its limits, the system should indicate its limits as clearly as possible [11, 58]. Second, the use and variety of social cues was very limited in both commercial cars and literature concepts. Social cues may create more easily understood, fluent and accepted car-driver interactions. These may, for example, include social behaviour such as facial expressions, or gestures such as pointing or turning towards a joint interest [59]. Third, while the literature showed an increasing variety of TOR output methods, TORs in commercial cars mainly kept to displays, beeps and steering wheel vibrations. It is important to transfer this development into commercial cars, as dividing feedback over different senses may prevent overloading the driver during take-over. Alternative output modes may be useful when drivers are engaged in non-driving tasks and are not holding the steering wheel or looking at the dashboard. Lastly, both literature and commercial cars lacked situated feedback to the driver. This leaves a large opportunity to design driver-adaptive feedback systems. The request may, for example, take the current activity of the driver into consideration. This is especially relevant in higher level automated cars, where the driver may be immersed in different activities such as work.
In order to create a safe mode transition, the system should take the driver into consideration and adapt the feedback accordingly. This can be done not only through timing, but also by changing the location, intensity or modality of the information according to the driver’s activity.
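To make this idea concrete, the selection logic described above can be sketched as a small decision function. Note that this is purely an illustrative sketch under assumed activity categories and modality rules; it is not drawn from any of the reviewed systems.

```python
# Hypothetical sketch of driver-adaptive take-over request (TOR) feedback.
# The activity categories, modality rules and thresholds are illustrative
# assumptions, not taken from any reviewed commercial or literature concept.

def select_tor_feedback(driver_activity: str, urgency: float) -> dict:
    """Pick the modality, location and intensity of a TOR based on the
    driver's current non-driving activity and the urgency (0.0 to 1.0)."""
    feedback = {"visual": False, "auditory": False, "haptic": False,
                "location": "dashboard",
                "intensity": min(max(urgency, 0.0), 1.0)}
    if driver_activity == "watching_road":
        feedback["visual"] = True                 # eyes are already forward
    elif driver_activity == "reading":
        feedback["auditory"] = True               # eyes are off the road
        feedback["haptic"] = True
        feedback["location"] = "seat"             # driver may not touch the wheel
    elif driver_activity == "phone_call":
        feedback["haptic"] = True                 # auditory channel is occupied
        feedback["visual"] = True
        feedback["location"] = "steering_wheel"
    else:                                         # unknown activity: use all channels
        feedback.update(visual=True, auditory=True, haptic=True)
    if urgency > 0.8:                             # critical: escalate to all modalities
        feedback.update(visual=True, auditory=True, haptic=True)
    return feedback
```

The point of the sketch is that the driver state changes not just when the message is delivered, but which channels and locations carry it, in line with the opportunity identified above.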
Design opportunities for the general HMI during automation were also identified. First, although a few concepts with social cues were presented in the literature, only ‘hands on wheel’ symbols were present in the commercial cars. Again, there is an opportunity to transfer a greater variety of social cues to commercial cars. The literature concepts already included facial expressions, small talk and mutual gaze. It is encouraged to expand the development of these and new cues to aid the driver in understanding the car through continuous fluent interactions [60, 61]. Second, the bodily experience in the general HMI during automation mainly consists of visuals, audio and vibrations. This holds for both the commercial cars and the literature concepts. An opportunity is found in including other senses that may be less obvious at first sight, such as smell [62, 63], taste and the vestibular sense [64, 65]. Although a few concepts use braking as vestibular feedback, it can be explored further, as the lateral and longitudinal forces make up a large part of the driving experience and seem to be natural cues for vehicle passengers to respond to. Developing other forms of vestibular feedback may improve the situational awareness of drivers in automated mode while they perform non-driving activities [64]. Third, with regard to situatedness, the HMIs in commercial cars and the literature mainly adapt the timing of their message to the driver state. The form and content of the message, however, do not change. As previously stated, it may be necessary to design driver-dependent feedback because of the different activities the driver may be engaged in during automation.
Some limitations of this review have to be taken into account. We recognize that we may have missed papers or car systems that would have been relevant to this review. The search terms described in the method section were carefully chosen; however, they may still not cover all relevant papers. New commercial or industrial concepts in particular may have been missed, as the development of automated cars is currently proceeding very fast. Another limitation is that, as mentioned before, the three reviewed embodied characteristics (suppleness, bodily experience, situatedness) do not represent every aspect of embodied interaction. No established method exists to review interactional systems on their embodiment. However, we chose these key elements of embodied interaction as a guideline to explore the HMI in partially automated car systems, as they represent its main concepts.
In conclusion, we firmly believe that embodied interaction holds great promise for the next generation of automated vehicles. While the industry often aims to resolve human factors issues by improving vehicle technology, we believe that this may even enlarge some classic human factors issues. The role of self-explaining and supportive feedback will therefore become even more important as technology improves. Embodied interaction holds promise for both the TOR feedback and the general HMI during automation. For TOR feedback, new embodied designs are encouraged to focus especially on the development of social cues, in- and output methods and adaptivity to the driver. For the general HMI during automation, new embodied design opportunities lie in the output methods and adaptivity to the driver. By including these embodied elements, we can create HMI designs that foster a more fluent and natural transition between automated and manual driving, reducing the need to invest in extensive training. This entails keeping drivers in the loop during automation so that they are not overwhelmed at the transfer of control, and supporting a fluent transfer back to manual driving. Including the key characteristics of embodied interaction in future HMIs may create safer, more efficient and more effective car-driver interactions in automated cars.
References
SAE International: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (2016)
Fagnant, D.J., Kockelman, K.: Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations. Transp. Res. Part A Policy Pract. 77, 167–181 (2015). https://doi.org/10.1016/j.tra.2015.04.003
Tientrakool, P., Ho, Y.C., Maxemchuk, N.F.: Highway capacity benefits from using vehicle-to-vehicle communication and sensors for collision avoidance. In: IEEE Vehicular Technology Conference, pp. 0–4 (2011). https://doi.org/10.1109/vetecf.2011.6093130
Van Wee, B., Annema, J.A., Banister, D.: The Transport System and Transport Policy, an Introduction. Edward Elgar Publishing Limited, Cheltenham (2013)
Merat, N., Jamson, A.H., Lai, F.C.H., Carsten, O.: Highly automated driving, secondary task performance, and driver state. Hum. Factors J. Hum. Factors Ergon. Soc. 54, 762–771 (2012). https://doi.org/10.1177/0018720812442087
Martens, M.H., Van Den Beukel, A.P.: The road to automated driving: dual mode and human factors considerations. In: IEEE Conference on Intelligent Transportation Systems, pp 2262–2267 (2013)
Endsley, M.R., Kaber, D.B.: Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics 42, 462–492 (1999). https://doi.org/10.1080/001401399185595
Saffarian, M., de Winter, J.C.F., Happee, R.: Automated driving: human-factors issues and design solutions. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 56, 2296–2300 (2012). https://doi.org/10.1177/1071181312561483
Banks, V.A., Stanton, N.A., Harvey, C.: Sub-systems on the road to vehicle automation: Hands and feet free but not “mind” free driving. Saf. Sci. 62, 505–514 (2014). https://doi.org/10.1016/j.ssci.2013.10.014
De Winter, J.C.F., Happee, R., Martens, M.H., Stanton, N.A.: Effects of adaptive cruise control and highly automated driving on workload and situation awareness: a review of the empirical evidence. Transp. Res. Part F Traffic Psychol. Behav. 27, 196–217 (2014). https://doi.org/10.1016/j.trf.2014.06.016
Carsten, O., Martens, M.H.: How can humans understand their automated cars? HMI principles, problems and solutions. Cogn. Technol. Work, 1–18 (2018). https://doi.org/10.1007/s10111-018-0484-0
Brookhuis, K.A., de Waard, D., Janssen, W.H.: Behavioural impacts of advanced driver assistance systems–an overview. Eur. J. Transp. Infrastruct. Res. 1, 245–253 (2001)
Farrell, S., Lewandowsky, S.: A connectionist model of complacency and adaptive recovery under automation. J. Exp. Psychol. Learn. Mem. Cogn. 26, 395–410 (2000)
Stanton, N.A., Young, M.S.: Driver behaviour with adaptive cruise control. Ergonomics 48, 1294–1313 (2005). https://doi.org/10.1080/00140130500252990
Endsley, M.: Situation awareness. In: Salvendy, G. (ed.) Handbook of Human Factors and Ergonomics, pp. 553–568. Wiley (2012). Chapter 19
Cognition (n.d.). In: Oxford Living Dictionaries. https://en.oxforddictionaries.com/definition/cognition. Accessed 21 Sept 2018
Wachsmuth, I., Lenzen, M., Knoblich, G.: Introduction to embodied communication: why communication needs the body. In: Embodied Communication in Humans and Machines, pp. 1–34 (2012). https://doi.org/10.1093/acprof:oso/9780199231751.003.0001
Clark, A.: Embodiment and explanation. In: Calvo, P., Gomila, T. (eds.) Handbook of Cognitive Science: An Embodied Approach, pp. 41–56. Elsevier (2008)
Anderson, M.L.: Embodied cognition: a field guide. Artif. Intell. 149, 91–130 (2003). https://doi.org/10.1016/S0004-3702(03)00054-7
van Dijk, J., van der Lugt, R., Hummels, C.: Beyond distributed representation: embodied cognition design supporting socio - sensorimotor couplings. In: 8th International Conference on Tangible, Embedded and Embodied Interaction, pp. 181–188 (2014). https://doi.org/10.1145/2540930.2540934
Isbister, K., Höök, K.: Supple interfaces: designing and evaluating for richer human connections and experiences. In: Extended Abstracts on Human Factors in Computing Systems, CHI 2007, pp. 2853–2856 (2007). https://doi.org/10.1145/1240866.1241094
Audi MediaCenter: TechDay piloted driving. The traffic jam pilot in the new Audi A8, pp. 1–14 (2017)
Cadillac: CT6 SUPER CRUISE TM Convenience & Personalization Guide (2018)
Isbister, K., Höök, K.: On being supple: in search of rigor without rigidity in meeting new design and evaluation challenges for HCI practitioners. In: Proceedings 27th International Conference on Human Factors in Computing Systems, pp. 2233–2242 (2009). https://doi.org/10.1145/1518701.1519042
Sundström, P., Höök, K.: Hand in hand with the material: designing for suppleness. In: Proceedings 28th International Conference on Human Factors in Computing Systems, CHI 2010, pp. 463–472 (2010). https://doi.org/10.1145/1753326.1753396
Mirnig, A.G., et al.: Control transition interfaces in semiautonomous vehicles: a categorization framework and literature analysis. In: Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2017), pp. 209–220 (2017)
Dourish, P.: Where The Action Is: The Foundations of Embodied Interaction. The MIT Press, Cambridge (2004)
De Jaegher, H., Di Paolo, E.: Participatory sense-making: an enactive approach to social cognition. Phenomenol. Cogn. Sci. 6, 485–507 (2007). https://doi.org/10.1007/s11097-007-9076-9
Audi: Owner’s Manual (2018)
Tesla: Model S Owner’s Manual (2018)
BMW (Bayerische Motoren Werke): THE BMW 7 SERIES (2015)
Mercedes-Benz: E-Class Sedan and Wagon Operator’s Manual (2016)
Mercedes-Benz: Mercedes-Benz Intelligent Drive (2017). https://www.mercedes-benz.com/en/mercedes-benz/innovation/mercedes-benz-intelligent-drive/. Accessed 7 Nov 2017
van der Heiden, R.M.A., Iqbal, S.T., Janssen, C.P.: Priming drivers before handover in semi-autonomous cars. In: Proceedings 2017 CHI Conference on Human Factors in Computing Systems - CHI 2017, pp. 392–404 (2017). https://doi.org/10.1145/3025453.3025507
Naujoks, F., Mai, C., Neukum, A.: The effect of urgency of take-over requests during highly automated driving under distraction conditions. In: Proceedings 5th International Conference on Applied Human Factors and Ergonomics, AHFE, pp. 2099–2106 (2014)
Naujoks, F., Forster, Y., Wiedemann, K., Neukum, A.: A human-machine interface for co-operative highly automated driving. Adv Hum. Asp. Transp., 585–595 (2017). https://doi.org/10.1007/978-3-319-41682-3_49
Zimmermann, M., Bengler, K.: A multimodal interaction concept for cooperative driving. In: IEEE Intelligent Vehicles Symposium Proceedings, pp. 1285–1290 (2013). https://doi.org/10.1109/ivs.2013.6629643
Politis, I., Brewster, S., Pollick, F.: Using multimodal displays to signify critical handovers of control to distracted autonomous car drivers. Int. J. Mob. Hum. Comput. Interact 9, 1–16 (2017). https://doi.org/10.4018/ijmhci.2017070101
Borojeni, S.S., Chuang, L., Heuten, W., Boll, S.: Assisting drivers with ambient take-over requests in highly automated driving. In: AutomotiveUI 2016, pp. 237–244 (2016). https://doi.org/10.1145/3003715.3005409
Borojeni, S.S., Wallbaum, T., Heuten, W., Boll, S.: Comparing shape-changing and vibro-tactile steering wheels for take-over requests in highly automated driving. In: Proceedings 9th International Conference Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2017, pp. 221–225 (2017). https://doi.org/10.1145/3122986.3123003
Kerschbaum, P., Lorenz, L., Bengler, K.: A transforming steering wheel for highly automated cars. In: IEEE Intelligent Vehicles Symposium, pp. 1287–1292, August 2015. https://doi.org/10.1109/ivs.2015.7225893
Melcher, V., Rauh, S., Diederichs, F., Widlroither, H., Bauer, W.: Take-over requests for automated driving. Procedia Manuf. 3, 2867–2873 (2015). https://doi.org/10.1016/j.promfg.2015.07.788
Lorenz, L., Kerschbaum, P., Schumann, J.: Designing take over scenarios for automated driving: how does augmented reality support the driver to get back into the loop? In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pp. 1681–1685, January 2014. https://doi.org/10.1177/1541931214581351
Chnl, C.: 2018 Audi A8 Traffic Jam Pilot Test Drive (2017). https://www.youtube.com/watch?v=BkcZ2OPmIq0. Accessed 21 Sept 2018
Tesla: Uw Autopilot is er (2015). https://www.tesla.com/nl_NL/blog/your-autopilot-has-arrived?redirect=no. Accessed 12 Dec 2018
van den Beukel, A.P., van der Voort, M.C., Eger, A.O.: Supporting the changing driver’s task: exploration of interface designs for supervision and intervention in automated driving. Transp. Res. Part F Traffic Psychol. Behav. 43, 279–301 (2016). https://doi.org/10.1016/j.trf.2016.09.009
Beller, J., Heesen, M., Vollrath, M.: Improving the driver-automation interaction: an approach using automation uncertainty. Hum. Factors J. Hum. Factors Ergon. Soc. 55, 1130–1141 (2013). https://doi.org/10.1177/0018720813482327
Rezvani, T., Driggs-campbell, K., Sadigh, D., Sastry, S.S., Seshia, S.A., Bajcsy, R.: Towards trustworthy automation: user interfaces that convey internal and external awareness. In: IEEE 19th International Conference on Intelligent Transportation Systems (2016)
Karatas, N., Yoshikawa, S., Tamura, S., Otaki, S., Funayama, R., Okada, M.: Sociable driving agents to maintain driver’s attention in autonomous driving. In: 26th IEEE International Symposium on Robot and Human Interactive Communication, pp. 143–149 (2017)
Hester, M., Lee, K., Dyre, B.P.: “Driver take over”: a preliminary exploration of driver trust and performance in autonomous vehicles. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, pp. 1969–1973, October 2017. https://doi.org/10.1177/1541931213601971
Telpaz, A., Rhindress, B., Zelman, I., Tsimhoni, O.: Haptic Seat for Automated Driving: Preparing the Driver to Take Control Effectively (2015). https://doi.org/10.1145/2799250.2799267
Wi, D., Sodemann, A., Chicci, R.: Vibratory haptic feedback assistive device for visually-impaired drivers. In: 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation, pp. 1–5 (2017)
Nees, M.A., Helbein, B., Porter, A., College, L.: Speech auditory alerts promote memory for alerted events in a video-simulated self-driving car ride. Hum. Factors (2014). https://doi.org/10.1177/0018720816629279
Gowda, N., Kohler, K., Ju, W.: Dashboard design for an autonomous car. In: Proceedings 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2014, pp. 1–4 (2014). https://doi.org/10.1145/2667239.2667313
Hock, P., Kraus, J., Walch, M., Lang, N., Baumann, M.: Elaborating feedback strategies for maintaining automation in highly automated driving. In: Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 105–112 (2016)
Helldin, T., Falkman, G., Riveiro, M., Davidsson, S.: Presenting system uncertainty in automotive UIs for supporting trust calibration in autonomous driving. In: Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2013, pp. 5:210–5:217 (2013). https://doi.org/10.1145/2516540.2516554
Gold, C., Damböck, D., Lorenz, L., Bengler, K.: “Take over!” how long does it take to get the driver back into the loop? Proc. Hum. Factors Ergon. Soc. Annu. Meet. 57, 1938–1942 (2013). https://doi.org/10.1177/1541931213571433
Boelhouwer, A., van den Beukel, A.P., van der Voort, M.C., Martens, M.H.: Should I take over? Does system knowledge help drivers in making take-over decisions while driving a partially automated car? Transp. Res. Part F Traffic. Psychol. Behav. 60, 669–684 (2019). https://doi.org/10.1016/j.trf.2018.11.016
Cassell, J., Bickmore, T., Vilhjálmsson, H., Yan, H.: More than just a pretty face. In: Proceedings 5th International Conference Intelligent User Interfaces, IUI 2000, pp. 52–59 (2000). https://doi.org/10.1145/325737.325781
Flemisch, F.O., Adams, C.A., Conway, S.R., Goodrich, K.H., Palmer, M.T., Schutte, P.C.: The H-metaphor as a guideline for vehicle automation and interaction (2003)
Damböck, D., Kienle, M., Bengler, K., Bubb, H.: The H-metaphor as an example for cooperative vehicle driving. In: Jacko, J.A. (ed.) HCI 2011. LNCS, vol. 6763, pp. 376–385. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21616-9_42
Dmitrenko, D., Maggioni, E., Thanh Vi, C., Obrist, M.: What did i sniff? Mapping scents onto driving-related messages. In: Proceedings 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, pp. 154–163 (2017). https://doi.org/10.1145/3122986.3122998
Kaye, J.J.: Making scents: aromatic output for HCI. Interactions 11, 48–61 (2004). https://doi.org/10.1145/962342.964333
Peterson, B., Wells, M., Furness, T.A., Hunt, E.: The effects of the interface on navigation in virtual environments. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 42, no. 21, pp. 1496–1500, 1696 (1998). https://doi.org/10.1177/154193129804202107
Chance, S.S., Gaunet, F., Beall, A.C., Loomis, J.M.: Locomotion mode affects the updating of objects encountered during travel: the contribution of vestibular and proprioceptive inputs to path integration. Presence Teleoperators Virtual Environ. 7, 168–178 (1998). https://doi.org/10.1162/105474698565659
BMW (Bayerische Motoren Werke): DE BMW 7 SERIE. ASSISTENTIESYSTEMEN (2018). https://www.bmw.nl/nl/modellen/7-serie/sedan/ontdek/assistentiesystemen.html. Accessed 21 Sept 2018
eGearTv: BMW Driving Assistant Plus (7-Series) - POV Test Drive (2017). https://www.youtube.com/watch?v=JAfr8NXpJuI&t=417s. Accessed 21 Sept 2018
SlashGear: Cadillac Super Cruise First Drive on the 2018 Cadillac CT6 (2017). https://www.youtube.com/watch?v=T90LPU_JT7Q. Accessed 21 Sept 2018
MercBenzKing: 2018 Mercedes S Class Long - NEW Full Review Drive Pilot Assist Lights Distronic Plus Lane Keeping (2018). https://www.youtube.com/watch?v=SW3OfMlGrwA. Accessed 21 Sept 2018
Nick’s Tesla Life: Tesla 8.0 New Dashboard View (2016). https://www.youtube.com/watch?v=GzwpCVzYwg0. Accessed 21 Sept 2018
Tesla Family: “Take Over Immediately” Tesla Autopilot Warning Test Model X (2017). https://www.youtube.com/watch?v=daHgi5qNUHA. Accessed 21 Sept 2018
Volvo: S90 Owner’s Manual (2018)
Volvo Cars: Volvo Cars How-To: Pilot Assist (2017). https://www.youtube.com/watch?v=N5gmgqXY5FI. Accessed 21 Sept 2018
Blanco, M., et al.: Automated vehicles: take-over request and system prompt evaluation. In: Meyer, G., Beiker, S. (eds.) Road Vehicle Automation 3. LNM, pp. 111–120. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40503-2_9
Toffetti, A., et al.: CityMobil: human factor issues regarding highly automated vehicles on eLane. Transp. Res. Rec. J. Transp. Res. Board, 1–8 (2009). https://doi.org/10.1258/itt.2010.100803
Petermeijer, S., Bazilinskyy, P., Bengler, K., de Winter, J.: Take-over again: investigating multimodal and directional TORs to get the driver back into the loop. Appl. Ergon. 62, 204–215 (2017). https://doi.org/10.1016/j.apergo.2017.02.023
Walch, M., Lange, K., Baumann, M., Weber, M.: Autonomous driving: investigating the feasibility of car-driver handover assistance. In: Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2015, pp. 11–18 (2015). https://doi.org/10.1145/2799250.2799268
Duthoit, V., Sieffermann, J.M., Enrègle, É., Michon, C., Blumenthal, D.: Evaluation and optimization of a vibrotactile signal in an autonomous driving context. J. Sens. Stud. 33, 1–10 (2018). https://doi.org/10.1111/joss.12308
Petermeijer, S.M., Cieler, S., de Winter, J.C.F.: Comparing spatially static and dynamic vibrotactile take-over requests in the driver seat. Accid. Anal. Prev. 99, 218–227 (2017). https://doi.org/10.1016/j.aap.2016.12.001
Langlois, S., Soualmi, B.: Augmented reality versus classical HUD to take over from automated driving: an aid to smooth reactions and to anticipate maneuvers1. In: IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, pp. 1571–1578 (2016). https://doi.org/10.1109/itsc.2016.7795767
Kim, N., Jeong, K., Yang, M., Oh, Y., Kim, J.: “Are You Ready to Take-over?” An exploratory study on visual assistance to enhance driver vigilance figure. In: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA 2017, pp. 1771–1778 (2017). https://doi.org/10.1145/3027063.3053155
Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., Nass, C.: Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. Int. J. Interact. Des. Manuf. 9, 269–275 (2014). https://doi.org/10.1007/s12008-014-0227-2
Acknowledgements
This research is supported by the Dutch Domain Applied and Engineering Sciences, which is part of the Netherlands Organization for Scientific Research (NWO), and which is partly funded by the Ministry of Economic Affairs (project number 14896).
Ethics declarations
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Appendix A
© 2019 Springer Nature Switzerland AG
Boelhouwer, A., van Dijk, J., Martens, M.H. (2019). Turmoil Behind the Automated Wheel. In: Krömker, H. (eds) HCI in Mobility, Transport, and Automotive Systems. HCII 2019. Lecture Notes in Computer Science(), vol 11596. Springer, Cham. https://doi.org/10.1007/978-3-030-22666-4_1
Print ISBN: 978-3-030-22665-7
Online ISBN: 978-3-030-22666-4