Journal on Multimodal User Interfaces, Volume 13, Issue 2, pp 53–54

Multimodal interaction in automotive applications

  • Dirk Schnelle-Walka
  • David R. McGee
  • Bastian Pfleging

With the smartphone, cloud computing, and wireless networking becoming ubiquitous, pervasive distributed computing is approaching reality. Multiple modes are already available in today’s automotive dashboards, including haptic controllers, touch screens, 3D gestures, speech, audio, secondary displays, and gaze, among others. And increasingly, the internet of things is weaving its way into many aspects of our daily lives, encouraging users to project their expectations for natural interaction onto all kinds of digital interfaces, including cars. Car manufacturers do not yet fully meet these expectations, but the clear trend is for them to add technology to cars that delivers on the vision and promise of a safer, more enjoyable drive, and multimodal interaction technology is a key part of delivering this vision.

In fact, car manufacturers are aiming for a personal assistant with a deep understanding of the car and the ability to meet driving-related demands and non-driving-related needs. For instance, such an assistant could naturally answer questions about the car and help schedule service when needed. It could find the preferred gas station along the route, or, even better, plan a stop while ensuring arrival in time for a meeting. It would understand that a perfect business meal involves more than finding a sponsored restaurant and, instead, requires unbiased reviews, availability, budget, and trouble-free parking, and it would notify all invitees of the meeting time and location. Finally, multimodality can serve as a source for fatigue detection. The main goal of multimodal interaction and driver assistance systems is to ensure that drivers can focus on their primary task: driving safely.

This is why some of the biggest innovations in today’s cars expand how drivers and their passengers are able to access information and control non-driving functions in their vehicle. In this special issue, the authors depict the challenges and opportunities of multimodal interaction in helping to reduce cognitive load and increase learnability, as well as current research that has the potential to be employed in tomorrow’s cars.

We present seven selected, peer-reviewed submissions that investigate this topic from various viewpoints, incorporating various modalities.

Touch input has been available in cars for some time. Designers have had freedom, for example, in arranging the controls, thus allowing them to be tailored to specific tasks. In contrast to touch, in-air gestures are generally not restricted to the driver’s interaction with a predefined surface but may occur wherever the driver’s hands happen to be. Jason Sterkenburg, Steven Landry and Myounghoon Jeon explore the capabilities of this modality combined with auditory displays and eye tracking in Design and Evaluation of Auditory-supported Air Gesture Controls in Vehicles.

The manuscript by Francesco Biondi, Douglas Getty, Joel Cooper and David Strayer focuses on Investigating the Role of Design Components on Workload and Usability of In-Vehicle Auditory-Vocal Systems. Similarly, Michael Braun, Nora Broy, Bastian Pfleging and Florian Alt look at how such voice-based systems can be efficiently augmented by visualizations in Visualizing Natural Language Interaction for Conversational In-Vehicle Information Systems to Minimize Driver Distraction. In a similar regard, it is important to consider how efficient switching between the modalities available in the car can be promoted. Such a study, focusing on touch and speech, is presented by Florian Roider, Sonja Rümelin, Bastian Pfleging and Tom Gross in Investigating the Effects of Modality Switches on Driver Distraction and Interaction Efficiency in the Car.

Since voice-based interfaces may also act as personal assistants, Jana Fank, Natalie Tara Richardson and Frank Diermeyer investigate, in Anthropomorphising driver-truck interaction: A study on the current state of research and the introduction of two innovative concepts, how such assistants affect the relationship between trucks and their drivers, laying the groundwork for the development of multimodal interaction concepts.

The modalities of eye-gaze, ambient displays and focal icons are the focus of the research presented by Andreas Löcken, Fei Yan, Wilko Heuten and Susanne Boll in Investigating Driver Gaze Behavior During Lane Changes Using Two Visual Cues: Ambient Light and Focal Icons, which compares potential ambient visual cues to make changing lanes safer.

The impact of the atypical modality of emotion, and how music contributes to driving performance, is examined by S. Maryam Fakhr Hosseini and Myounghoon Jeon in How Do Angry Drivers Respond to Emotional Music? A Comprehensive Perspective on Assessing Emotion. The authors seek to understand whether music can improve the performance of angry drivers compared to emotionally neutral ones.

We would also like to express our special thanks to all the reviewers, who made excellent suggestions to the authors on how each of them might improve their contribution:
  • Timo Baumann

  • Okko Buss

  • Michael Braun

  • Phil Cohen

  • Frank Flemisch

  • Anna-Katharina Frison

  • Thomas Gable

  • Tobias Grosse-Puppendahl

  • Renate Häuslschmid

  • Mariam Hassib

  • Jochen Huber

  • Andrew Kun

  • David Large

  • David R. McGee

  • Florian Müller

  • Andreas Riener

  • Florian Roider

  • Sonja Rümelin

  • Dirk Schnelle-Walka

  • Ronald Schroeter

  • Valentin Schwind

  • Tim Claudius Stratmann

  • Jacques Terken

Thank you all!

Bastian, David and Dirk


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Dirk Schnelle-Walka (1)
  • David R. McGee (2)
  • Bastian Pfleging (3, 4)
  1. Neustadt an der Weinstraße, Germany
  2. Bellevue, USA
  3. Eindhoven University of Technology, Eindhoven, The Netherlands
  4. Ludwig-Maximilians-Universität München, Munich, Germany
