
1 Introduction

While cars with basic automated functions, such as Adaptive Cruise Control (ACC) and Lane Keeping (LK), are becoming more widely available to consumers, higher levels of automation such as level 3 and 4 [1] are under development. These highly automated systems take over the longitudinal and lateral control of the car. With a level 2 system, drivers still need to monitor the driving situation continuously. With a level 3 system, drivers no longer need to monitor the driving situation continuously, but still have to be able to take back control when requested within a given time frame. A level 4 system includes a minimal risk maneuver in case the driver does not take back control after a request. As these systems have an Operational Design Domain (ODD) and do not function in all possible situations, drivers still need to take back control occasionally. In this interaction, the Human Machine Interface (HMI) plays a crucial role in helping drivers understand their automated vehicle (Carsten and Martens 2018).

Automated cars can provide multiple benefits for both the driver and society as a whole. These include improved traffic safety, potentially reduced fuel consumption and the accompanying cost reductions, CO2 emission reductions, and improved driver comfort [2,3,4]. In case drivers still have to monitor the situation continuously, they are at least relieved from some of the physical effort of driving. When they do not need to monitor the situation, they can engage in non-driving tasks while traveling. Studies like those of [5] have already shown that drivers engage in tasks ranging from reading to playing games on a tablet.

However, recent studies show that besides the potential benefits, automated cars may create safety issues in the driver-car interaction [6,7,8]. Expected issues when the driver needs to take over are driver distraction, automation surprise, loss of situational awareness and high workload [6, 9,10,11]. The role of drivers shifts from operator to supervisor. This new role of supervisor, which is required with level 2 systems, has been shown to be difficult for humans [12, 13]. In this case, distraction towards non-driving activities with a loss of situational awareness is to be expected. Moreover, shifting from the distraction back to the driving task can be challenging, especially in level 3 or 4 vehicles, where the driver is temporarily not required to monitor the driving situation and may be immersed in a non-driving task. Drivers have to disengage both physically and mentally from the non-driving task before resuming manual control.

Studies have shown lowered situational awareness in drivers who were engaged in non-driving tasks for long periods of time [14]. Using the commonly used definition by [15], the situational awareness of drivers can be described as: perceiving the driving situation, understanding this situation, and projecting the status of this situation into the future. When drivers are requested to take back control, they first need to regain their situational awareness to a level at which they are capable of safely resuming control. To avoid negative effects on safety, acceptance and driver comfort, the car's HMI should take these human factors into consideration. In case of distraction or immersion in non-driving activities, the HMI should be able to support the driver in smoothly returning to the driving task and regaining situational awareness efficiently. It can also provide support during the automated phase to, for example, retain a certain level of situational awareness in the driver. Interestingly, solutions are very often sought in improving system reliability: the more reliable the system, the fewer human factors issues are expected to arise. However, as Carsten and Martens (2018) already indicated, this is not correct. With improving system reliability, comfort and trust will increase, but automation surprise and response times will also increase, and situational awareness, attention and trust calibration will decrease. Therefore, instead of focusing on improving system reliability, we believe that the primary focus should be on a proper interaction between the vehicle and the user, irrespective of the ODD or the system level.

Until now, the development of and research on driver support through in-car HMIs has mainly been addressed from a traditional cognitive psychology perspective and human-centered design. In this traditional perspective, cognition is considered to be “the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses” [16]. Although specific perspectives of course differ from each other, the mode transition from automation to manual control is commonly described as a sequence of consecutive mental processes and physical actions.

This study investigates driver support in partially automated vehicles from a different perspective: embodied interaction [17,18,19]. This may allow us to identify unused design spaces. Embodied interaction proposes that all knowledge and sense-making of the world emerges from a continuous and simultaneous interaction with the world [20]. In this perspective, cognition is not strictly designated as sequential processes of the brain. Rather, cognition is the entire system of interaction between mind, body and world. As [19] stated, “cognition is a highly embodied or situated activity […] thinking beings ought be considered first and foremost as acting beings”. Furthermore, embodied cognition states that all abstract symbols (including words) only gain meaning through embodied experiences and physical aspects. In this embodied perspective, the emphasis of gaining knowledge thus lies more on the physical acting of a person in a specific situation. By discarding the idea that cognition only occurs in the mind, new design spaces may be discovered. More emphasis can be put on the combination of mind and body making sense of situations in ongoing interactions with the environment.

This study reviews current HMI feedback systems of partially automated cars during two phases. The first phase consists of the Take-Over Request (TOR) by the car. This includes messages from the car that the driver needs to take back control from the automation. The second phase that is reviewed is the general HMI during automation. This phase also includes any Hands on Wheel Warnings (HOWW). These warnings indicate that drivers have to put their hands on the wheel (or ideally their eyes on the road). In most systems, the automation disengages if the driver does not comply with the HOWW. Since this is not a formal request to take over and is not linked to system limitations, it will be described as feedback during automation.

The HMI systems are reviewed in the light of three important characteristics of embodied interaction: suppleness [21], bodily experience [18] and situatedness [19]. These characteristics include, for example, the fluency with which TORs are introduced, the in- and output modalities, and whether the feedback systems are adaptive to the situation. Further details of the review protocol are discussed in the methodology section. While most review papers only discuss academic papers and patents, this review includes currently commercially available systems as well as systems that are being studied in experiments but are not yet on the market.

The goal of this review is to identify the current state of HMI support during the TOR phase and the automation phase in both literature and commercially available systems. We want to examine how they consider the main characteristics of embodied interaction in their design. This will allow us to identify unexplored design spaces and new opportunities for the design of HMI systems of partially automated cars. In sum, this study investigates two main research questions: (1) What embodied design elements are currently used in driver support during TORs and automated driving? (2) What are the unused embodied design spaces for designing HMI support for TORs and automated driving in partially automated cars?

2 Methods

2.1 Data Collection

The materials gathered for this literature review consisted of the following types: journal papers, conference papers, work-in-progress papers, technical reports and product documentation of commercial cars. It was decided to include technical reports and product documentation of commercial cars because the development of HMI in automated car systems is proceeding fast. Including these material types allowed the study to review the latest developments in both industry and academia. Both the commercial car systems and the concepts in literature were reviewed on (1) the HMI during TORs, and (2) the HMI during automated mode. As the majority of the gathered materials do not specify the level of automation nor the exact Operational Design Domain (ODD), the requirement for inclusion was that the system automated both lateral and longitudinal control simultaneously.

The literature papers and reports had to be written in English and published after 2008. Although other studies have conducted reviews over shorter periods, we believe it is necessary to include sources from a 10-year period; condensing this work into a short snapshot would undermine the continuous progress within the field. Literature reviews and meta-analysis studies were excluded. The following leading research databases were used to collect the journal and conference papers: Web of Science, IEEE, Scopus and Google Scholar.

For the TOR reviews, we only considered systems that issue a take-over request due to system limitations. Therefore, Hands on Wheel Warnings (HOWWs), which prompt the driver to keep their hands on the wheel without the need to disengage automation, were not reviewed among the TORs. However, the HOWWs were included in the review of the general HMI during automation. These warnings are often included in car systems both for legal reasons and with the intention of keeping the driver ready to take back control instantly.

Literature Concepts – TOR.

For the TOR review of literature concepts, the papers had to focus specifically on the design or testing of HMI support during TORs. Studies that only used an HMI as a means to perform their experiment on a different topic were excluded. The following keywords were used in the research databases: ((“Autonomous” OR “Self-driving” OR “Automated”) AND (“HMI” OR “Human machine interaction”) AND (“Design” OR “Feedback”) AND (“Take-over” OR “Take over” OR “Transition” OR “Warning” OR “Request”) AND (“Car”)).

Literature Concepts – HMI During Automation.

For the review of general HMI feedback during automated mode, only studies that specifically address the development and testing of an HMI design were included. Studies that use an HMI purely as a means to perform their experiment on a different topic were excluded. The search entry for materials on HMI systems during automated mode contained the following keywords: ((“Autonomous” OR “Self-driving” OR “Automated”) AND (“HMI” OR “Human machine interaction”) AND (“Interface” OR “Feedback”) AND (“Car”)).

Commercial Cars – TOR and HMI During Automation.

The selection of commercial car brands was based on their official user manuals and websites. The car system had to be available for purchase at the time of this review. To avoid an incomplete review, only systems that included all necessary information for the categorization were considered. Of the currently available systems, only two formally include a TOR [22, 23]. Therefore, the TOR review included just these two commercial car systems. (As mentioned before, the systems do include HOWWs; these are reviewed in the ‘general HMI during automation’ section.)

2.2 Data Coding

The materials gathered were labelled on three main use qualities of embodied interaction: suppleness [21, 24, 25], bodily experience [18, 20] and situatedness [19]. Although not exhaustive, these qualities are discussed frequently within the embodied interaction domain and are generally accepted to portray (some of) its core elements. Each quality will be discussed briefly with its respective measures. Some of the specific variables were adopted from the study by [26], which created a categorization framework for control transition interfaces. Tables 1 and 2 show all variables that were examined, for the TORs and the general HMI during automation respectively.

Table 1. Data coding scheme for TOR feedback.
Table 2. Data coding scheme for HMI during automation.

Suppleness. [24, 25] introduced the use quality of suppleness. They stressed designing for supple back-and-forth interaction between a user and a system, which can be seen as a fluent ‘dance’ [25]. The Webster dictionary definition of supple is considered the base for this use quality: “easy and fluent without stiffness or awkwardness”. In this study, we categorized the TORs on three supple qualities. The first was whether the transfer is introduced abruptly or gradually: Temporal Output Mode [26]. The TORs could be categorized as being shown once, several times, or incrementally. It was specified whether the support was given before/during deactivation of the system, or before a hazard. This was important to take into account, as the time to take over would be either the time before a collision or before deactivation of the system. The second variable was the amount of time the driver has to take back control: Time to take over. More specifically, how much time does the driver have after the TOR until the system disengages or the car crashes? The third item entailed the use of Social cues. The research and design area of embodied interaction is increasingly focusing on the incorporation of natural social interactions in artificially intelligent systems [27]. As we are social beings, we engage in continuous social interactions to understand and act on the world [28]. Therefore, we investigated whether the HMI systems make any use of social cues that we use daily in human-to-human communication. These could, for example, be facial expressions and gestures.

Bodily experience. Inclusion of the body in making sense of the situation is at the core of embodied interaction [18, 20]. Our entire body and all our senses are involved in learning and creating an understanding of the world. By including multiple senses in a feedback system, overload may be reduced or prevented. Therefore, the way the driver has to disengage automation (Input) was included in this review, as well as the modality of the TOR itself (Output). For the input, we used a classification similar to [26], which included physical, touchscreen, gesture and speech. However, touchscreen was made a subclassification of physical input, and we additionally included the options activity recognition and ‘other’ input. Activity recognition includes all forms of system-initiated recognition, such as eye movement recognition or posture recognition. The physical class contains input through buttons, the steering wheel, the pedals and the touchscreen. For the output modalities, we included all five basic modalities: visual, auditory, haptic, smell and taste. As directional forces such as acceleration and deceleration are a large part of the driving experience, the vestibular sense is also included.

Situatedness. As the name suggests, situatedness [19] describes how the meaning of interactions with technology cannot be seen in isolation from the context in which they occur: interaction is always situated. Cognition relies on embodied interactions that take place within a specific situation. For example, a symbol or gesture can have a very different meaning in different contexts and for different people. In this study, TORs were investigated on whether or not they are Adaptive to the driver and the driving situation. Is the feedback the same for all drivers and all their driver states? Also, is the feedback the same in all driving situations?
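
To illustrate the coding scheme, the following Python sketch expresses the variables described above as a small data structure. This is only an illustrative representation under our own naming conventions; the exact variable names and value sets in Tables 1 and 2 may differ, and the example record at the end describes a hypothetical concept rather than any reviewed system.

from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class TemporalOutputMode(Enum):          # suppleness: how the TOR is introduced [26]
    ONCE = auto()
    SEVERAL_TIMES = auto()
    INCREMENTAL = auto()

class InputModality(Enum):               # bodily experience: how the driver disengages automation
    PHYSICAL = auto()                    # buttons, steering wheel, pedals, touchscreen
    GESTURE = auto()
    SPEECH = auto()
    ACTIVITY_RECOGNITION = auto()        # e.g. eye movement or posture recognition
    OTHER = auto()

class OutputModality(Enum):              # bodily experience: modality of the TOR itself
    VISUAL = auto()
    AUDITORY = auto()
    HAPTIC = auto()
    SMELL = auto()
    TASTE = auto()
    VESTIBULAR = auto()                  # e.g. a short brake jerk

@dataclass
class CodedConcept:
    source: str                                        # paper or commercial system
    temporal_output_mode: TemporalOutputMode
    before_hazard: bool                                 # support given before a hazard (True) or before/during deactivation (False)
    time_to_take_over_s: Optional[float]                # None if not reported
    social_cues: list = field(default_factory=list)     # e.g. ["hands-on-wheel symbol"]
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)
    adaptive_to_driver: bool = False                    # situatedness
    adaptive_to_situation: bool = False

# Hypothetical example of how one incremental TOR concept would be coded
example = CodedConcept(
    source="hypothetical literature concept",
    temporal_output_mode=TemporalOutputMode.INCREMENTAL,
    before_hazard=False,
    time_to_take_over_s=10.0,
    social_cues=["hands-on-wheel symbol"],
    inputs={InputModality.PHYSICAL},
    outputs={OutputModality.VISUAL, OutputModality.AUDITORY},
    adaptive_to_situation=True,
)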

3 Results

3.1 Reviewed Materials

An overview of the results can be found in Tables 3 to 6 in Appendix A. Seven different commercial car brands were selected for this review. All systems have the option to simultaneously activate the automation of lateral and longitudinal control. As the systems have different names across brands (and sometimes even within a brand), they will be addressed by their company-assigned name as used on their official websites and/or user manuals. The included brands and systems are: (1) Audi – AI Traffic Jam Pilot [22, 29], (2) Tesla – Autopilot [30], (3) Cadillac – Super Cruise [23], (4) BMW – Steering and Lane Guidance Assistant [31], (5) Volvo – Pilot Assist, (6) Mercedes – Drive Pilot [32, 33]. All commercial systems will be reviewed on their general HMI during automation (including HOWW). However, only two of these systems include a formal TOR, since they allow the driver to be temporarily out of the loop through a traffic jam assist. Therefore, only these two systems could be reviewed on their TOR [22, 23].

A total of 20 literature papers were reviewed on their TOR concepts in this analysis. Some papers discussed multiple concepts; these were considered as individual concepts, resulting in a total of 31 concepts that were reviewed. Fifteen papers were selected for the general HMI during automation. Again, as some papers presented multiple concepts, a final total of 17 concepts was reviewed. None of the literature concepts contained HOWWs; therefore, these could not be included in the general HMI review.

3.2 Take-Over Request (TOR)

The result tables are situated in Appendix A. Table 3 shows the results for TORs in commercial cars. Table 4 shows the results for TORs in literature concepts.

Commercial Cars.

Formally, only two of the assessed systems issue a TOR [22, 23]. Therefore, only these two commercial systems will be reviewed here. The remaining systems all require the driver to continuously keep their hands on the wheel or their eyes on the road.

Suppleness. The Audi AI Traffic Jam Pilot provides multiple TORs before the system disengages due to system limitations. Cadillac Super Cruise provides one TOR before deactivation; by the second warning, at the end of the take-over period, the system already deactivates. Both systems provide a social cue in the form of a symbol in which hands are holding a steering wheel (or an animation of hands grabbing the steering wheel).

Bodily Experience. In both cases, the TORs are visually displayed on a screen. Cadillac Super Cruise additionally uses color and illumination in the steering wheel. In both systems, the visual cues are complemented with auditory beep(s) and/or a spoken take-over message. Cadillac Super Cruise includes vibrations in the seat as part of the TOR. Audi includes a short brake jerk and tightens the safety belt three times during the second warning. In both systems, drivers can disengage the automation by turning the steering wheel, pressing one of the pedals or pushing a button.

Situatedness. The reviewed systems are not adapted to the driver. This means that the same message is given regardless of the current driver (state) or the activity the driver is currently performing. Neither system is adapted to the driving situation: the feedback does not change according to, for example, the reason that the car needs to transfer back control.

In conclusion, we found that the two reviewed commercial car TORs are very similar on the reviewed embodiment aspects. The Audi system is slightly more supple as it provides multiple TORs before the system disengages. Both TOR systems provide visual and auditory cues. These are complemented with vibrations (Cadillac), or seatbelt tightening and vestibular feedback through braking (Audi). The situatedness of the TOR feedback is lacking, as the feedback does not change its form according to the driver, nor to the specific driving situation.

Literature Concepts.

Suppleness. The majority of the concepts (N = 20) consist of a single TOR before a detected hazard (without the automation deactivating). Two concepts are similar to the commercial car systems in that they provide one take-over request during which the system is immediately deactivated [34, 35]. Eight of the reviewed concepts give several warnings before deactivation; five of these increase in intensity and cue modalities over time. The time that drivers have to take back control before deactivation or impact ranges widely, from 10 s to ‘a few minutes’. Two studies only report that the drivers had ‘sufficient’ time to take back control [36, 37], without stating how much time this specifically was. As a social cue, a few of the concepts (N = 5) include a symbol with hands on the steering wheel, as is also seen in the commercial car systems (Fig. 1). One concept uses a distressed voice in a verbal message in order to portray urgency [38].

Fig. 1. Examples of TORs that include a ‘hands on wheel’ symbol in literature. Left by [36] and right by [42].

Bodily Experience. Twenty-three of the TOR concepts give auditory feedback. This feedback is divided into abstract beeps (N = 16) and verbal messages (N = 2), while the remaining concepts combine both (N = 5). The majority of the concepts (N = 17) use a display. These include standardized symbols, text and the use of color or flashers. The color red is used in all cases to indicate an immediately required take-over. Of the display messages, thirteen are complemented with auditory or haptic feedback. Four of the concepts include lighting. While two concepts have a simple LED on the dashboard, the concept by [39] has an LED strip on the steering wheel that can light up in directional patterns. This way, it hints at the required steering direction after take-over. Two studies included mechanical transformations in their concepts. In the concept by [40], part of the steering wheel is replaced with grips that change direction during the TOR depending on the required steering direction. In the concept by [41], the upper part of the steering wheel moves backwards during automation and shifts back during the TOR. This is mainly done to emphasize the need to take back control. Eight of the concepts include vibration feedback, mainly applied in either the driver seat or the steering wheel. However, the concept in [38] gives vibration feedback through a wristband. The vibration feedback in the driver seat is either static or dynamic; in case of dynamic feedback, the vibration shifts along rows, creating the ‘illusion’ of motion or direction. In all but three concepts, drivers can take back control by engaging with the steering wheel or pedals.

Situatedness. One of the literature TORs is adaptive to the driver: the concept by [35] shows the TOR on the driver’s mobile device if the driver is using it. More than half of the concepts (N = 16) adapt to the driving situation. Most of these concepts contain a suggested (steering) action based on the situation. The way in which this is done ranges widely. Some provide a suggested steering direction through vibration or lighting direction, while others adapt the color or symbol accordingly. [40] even adapts the shape of the steering wheel according to the suggested steering direction. Some concepts do not suggest a direct action but rather provide boundaries within which the driver can operate. For example, the concept by [37] shows the intent and expected actions of other road users, while the concept by [43] shows an overlay on the driving lane indicating whether it is safe to continue driving there. Two concepts visually show the upcoming situation and why the driver needs to take over, for example dense fog or roadworks.

In conclusion, the majority of the literature concepts present one or multiple TORs before deactivation of the automation. This is expected, as it is easier to implement warnings before deactivation of automation as a pre-set in an experimental setting than in a car driving on the road. The variety of social cues is scarce. More variety is found in the bodily experience, but only in the output. This variety consists of physical shape changes, verbal messages, dynamic vibrations and lighting. However, the main outputs are still displays and auditory beeps. Only one of the concepts adapts to the driver. More concepts adapt to the specific driving situation; in these cases they mainly provide a suggested action, boundaries after the transfer of control, or reasons for the TOR.

3.3 HMI During Automation

The result tables are situated in Appendix A. Table 5 shows the results for general HMI during automation in commercial cars. Table 6 shows the results for general HMI during automation in literature concepts.

Commercial Cars.

Suppleness. Most of the commercial systems (N = 5) include a Hands on Wheel Warning. While it is not indicated exactly how long these warnings continue before the system disengages or stops the car, all systems provide these warnings several times while increasing the intensity (in some form). All systems use a ‘hands on wheel’ symbol as a social cue to indicate that the driver needs to keep their hands on the wheel.

Bodily experience. All systems use a visual display on a screen with illustrations, symbols, text and changing colors to provide feedback. If drivers keep their hands off the wheel or their eyes off the road for too long, they receive auditory beeps as a warning and vibrations in the steering wheel or seat. Cadillac Super Cruise includes illumination and changing colors in the steering wheel as additional feedback on the automation state. Drivers get visual feedback on the current car actions as they see the steering wheel turning. Besides this visual feedback, drivers can feel the car’s actions through the turning of the wheel.

Situatedness. All HMI systems during automation are partially adapted to the driver, as they sense whether the driver has their hands on the wheel, or their eyes on the road, and prompt a HOWW accordingly. There is some variation in the extent to which the systems are adapted to the driving situation. However, all of them show a combination of automation mode, detected vehicles, lane markings and speed limit.

Conclusion. The general HMI during automation of commercial cars is very similar across the systems on the investigated aspects. The suppleness with regard to social cues is limited to ‘hands on wheel’ symbols. The output is mainly given through displays, auditory beeps and vibrations in the steering wheel. The feedback is partially adaptive to the driver, as the systems issue a ‘hands on wheel’ (or eyes on road) message in case the car detects that the hands are not on the wheel (or the eyes are not on the road). The feedback is adaptive to the driving scenario, as all systems present the detected vehicles, obstacles, speed limit and/or lane markings. Figure 2 shows examples of the HMI during automation in the Audi (A8) and Tesla systems.

Fig. 2. Examples of HMI during automation. The left dashboard is by Audi for their A8 [44], the right dashboard is by Tesla [45] (copyright Tesla.com).

Literature Concepts.

Suppleness. Two concepts [36, 46] use the social cue of showing ‘hands on a steering wheel’. While the concept by [36] uses this to indicate manual driving mode, [46] uses it as a soft warning in case of potential hazards. Two concepts [47, 48] use facial expressions in emoticons as social cues to indicate the confidence of the automated system. [49] uses the tendency to engage in joint attention/gaze to redirect the driver’s attention: their concept contains three physical mini robots on the dashboard that turn their heads away from and towards the road ahead. The concept by [50] uses small talk to engage with the driver, consisting of sentences that are either driving-related or not.

Bodily experience. Six of the concepts provide multimodal feedback, in combinations of auditory, visual and/or haptic stimuli. Eleven of the concepts include visual feedback, most of which is given on displays (N = 9). Two concepts use lighting in their feedback [46, 49]. In [49] this is used to intensify the movement of the physical dashboard robots (as described above). [46] uses light in the windscreen as a soft warning to direct the driver’s attention towards potential hazards in the driving environment. [49] is the only concept to use movement of mechanical objects in its HMI. Two concepts use tactile stimuli. [51] uses vibrations in different parts of the driver seat to indicate approaching vehicles. The concept by [52] consists of a high-resolution haptic surface the driver can touch with their fingers. The authors report that the concept may be used for visually impaired passengers of automated cars, but an exact function of the device is not specified. The use of auditory feedback is split evenly between beeps and verbal statements. The study by [53] uses auditory icon sounds, which they describe as “non-speech sounds that bear some ecological relationship to their referent processes”. An example is a water gurgle sound to represent the message that fuel is running low.

Situatedness. Four of the reviewed concepts are adaptive to the driver in some form. The concept by [49] tries to engage the driver in looking at the road through personification (a small robot looking at the driver and then looking at the road) when the driver is inattentive. Similarly, the concept by [51] only starts the vibration feedback, which provides information about the surrounding traffic, if the driver is not looking at the road. The concept by [54] shows adaptive information on the driver’s condition during automation; what this information exactly entails is not specified. While the study by [52] is directed specifically at visually impaired drivers, the feedback is not dynamically adapted to the driver during automation. Almost all concepts are adapted to some degree to the driving situation (N = 13). They use a variety of combined methods to show adaptive feedback about the driving situation. Five of the concepts show the currently detected elements of the driving situation on a display, such as road users, lane markings and traffic signs. All five of these concepts also include the planned next action of the car, such as an upcoming turn or brake. [22] and [32] change the location of their feedback, vibration and illumination respectively, according to the detected hazards. While [50] uses casual remarks and questions about the driving situation to engage the driver, [55] adapts the verbal level of information according to the situation. For example, in some situations the system only mentions the current action (“the car is braking”), while in other situations it gives the reason why it is performing this action (“the car is braking because a traffic jam is coming up ahead”). [43] uses a direct overlay on the windscreen to show whether it is safe to continue in that lane after deactivation of the automation. Three of the concepts display the confidence of the system in continuing in automated mode [47, 48, 56].

Conclusion. The general HMI during automation in literature concepts shows a variety of supple social cues. These cues mainly include facial expressions, shared gaze, a ‘hands on wheel’ symbol and small talk. The bodily experience of the literature HMI concepts shows some variation. Only a small part of the literature concepts is adaptive to the driver; the ones that are mainly show the driver’s condition or provide feedback if the driver is not paying attention. The feedback is adaptive to the driving situation in most concepts. In these concepts, the feedback shows the confidence of the automated system, the detected environment and detected hazards. Some concepts change the location of their feedback according to the environment and the next actions of the car.

4 Discussion

The goal of this study was firstly to identify the current state of embodied design elements in driver support in partially automated cars. This way, new design spaces may be discovered to guide the design of innovative driver support in automated vehicles. To achieve this, partially automated car systems from literature and industry were reviewed from an embodied perspective. More specifically, we reviewed TOR feedback and the general HMI during automation on suppleness, bodily experience and situatedness.

Several opportunities for new designs were found in the current TOR feedback systems. Firstly, most commercial car systems do not provide a formal TOR, since the driver is expected to monitor the road continuously; rather, the system disengages when it can no longer function, accompanied by only a simple visual or auditory cue. While we recognize that this is most likely a technical limitation, implementing multiple incremental TORs before system disengagement may greatly improve the suppleness [57]. Especially as it can be very difficult for drivers to recognize by themselves when the system reaches its limits, the system should indicate its limits as clearly as possible [11, 58]. Second, the use and variety of social cues was very limited in both commercial cars and literature concepts. Social cues may create more easily understood, fluent and accepted car-driver interactions. These may, for example, include social behaviour such as facial expressions, or gestures such as pointing or turning towards a joint interest [59]. Third, while the literature showed an increasing variety of TOR output methods, TORs of commercial cars mainly kept to displays, beeps and steering wheel vibrations. It is important to transfer this development into commercial cars, as distributing feedback across different senses may prevent overloading the driver during take-over. Alternative output modes may be useful when drivers are engaged in non-driving tasks and are not holding the steering wheel or looking at the dashboard. Lastly, both literature and commercial cars lacked feedback that is situated with respect to the driver. This leaves a large opportunity to design driver-adaptive feedback systems. The request may, for example, take the current activity of the driver into consideration. This is especially relevant in higher level automated cars, where the driver may be immersed in different activities such as work. In order to create a safe mode transition, the system should take the driver into consideration and adapt the feedback accordingly. This can be done not only through timing, but also by changing the location, intensity or modality of the information according to the driver’s activity, as sketched below.
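
To illustrate what such driver-adaptive, incremental TOR feedback could look like, the following Python sketch selects output modalities based on an escalation stage and a (hypothetical) driver state. The sketch is not based on any reviewed system; the modality names, escalation steps and adaptation rules are our own assumptions.

from dataclasses import dataclass

@dataclass
class DriverState:
    hands_on_wheel: bool
    eyes_on_road: bool
    current_activity: str          # e.g. "reading on tablet"

def tor_feedback(stage: int, driver: DriverState) -> set:
    """Select output modalities for an incremental TOR.

    Stage 0 is the first, gentle request; higher stages escalate in intensity
    and add modalities until the driver responds or the system deactivates.
    """
    modalities = {"visual display"}
    if not driver.eyes_on_road:
        # a display alone will likely go unnoticed, so add a spoken message
        modalities.add("spoken message")
    if stage >= 1:
        modalities |= {"auditory beep", "seat vibration"}
    if stage >= 2 and not driver.hands_on_wheel:
        # strongest cue: vestibular feedback through a short brake jerk
        modalities.add("brake jerk")
    return modalities

# Example: a driver immersed in a non-driving task ignores the first two requests
driver = DriverState(hands_on_wheel=False, eyes_on_road=False, current_activity="reading on tablet")
for stage in range(3):
    print(stage, sorted(tor_feedback(stage, driver)))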

Design opportunities for the general HMI during automation were also identified. First, although a few concepts with social cues were presented in the literature, only ‘hands on wheel’ symbols were present in the commercial cars. Again, there is an opportunity to transfer a greater variety of social cues to commercial cars. The literature concepts already included facial expressions, small talk and mutual gaze. We encourage expanding the development of these and new cues to aid the driver in understanding the car through continuous fluent interactions [60, 61]. Second, the bodily experience in the general HMI during automation mainly consists of visuals, audio and vibrations. This holds for both the commercial cars and the literature concepts. There is an opportunity to include other senses that may be less obvious at first sight, such as smell [62, 63], taste and the vestibular sense [64, 65]. Although a few concepts use braking as vestibular feedback, this can be explored further, as lateral and longitudinal forces make up a large part of the driving experience and seem to be a natural cue for vehicle passengers to respond to. Developing other forms of vestibular feedback may improve the situational awareness of drivers in automated mode while they perform non-driving activities [64]. Third, with regard to situatedness, the HMIs in commercial cars and literature mainly adapt the timing of their message to the driver state; the form or content of the message does not change. As previously stated, it may be necessary to design driver-dependent feedback, given the different activities the driver may be engaged in during automation.

Some limitations of this review have to be taken into account. We recognize that we may have missed papers or car systems that would have been relevant to this review. The search terms described in the method section were carefully chosen, but they may still not cover all relevant papers. New commercial or industrial concepts in particular may have been missed, as the development of automated cars is currently proceeding very fast. Another limitation is that, as mentioned before, the three reviewed embodied characteristics (suppleness, bodily experience, situatedness) do not represent every aspect of embodied interaction. No established method exists to review interactional systems on their embodiment. However, we chose to take these key elements of embodied interaction as a guideline to explore the HMI in partially automated car systems, as they represent the main concepts.

In conclusion, we firmly believe that embodied interaction holds great promise for all next-generation automated vehicles. While industry often aims to fight human factors issues by improving vehicle technology, we believe that this may even enlarge some classic human factors issues. Therefore, the role of self-explaining and supportive feedback will become even more important as technology improves. Embodied interaction holds great promise for both the TOR feedback and the general HMI during automated driving. For TOR feedback, new embodied designs are encouraged to focus especially on the development of social cues, in- and output methods and adaptivity to the driver. For the general HMI during automation, new embodied design opportunities lie in the output methods and adaptivity to the driver. By including these embodied elements, we can create HMI designs that foster a more fluent and natural transition between automation and manual driving, reducing the need to invest in extensive training. This entails keeping drivers in the loop during automation so they are not overwhelmed at the transfer of control, and supporting a fluent transfer back to manual driving. Including the key characteristics of embodied interaction in future HMIs may create safer, more efficient and more effective car-driver interactions in automated cars.