Multi-modal Information Processing in Companion-Systems: A Ticket Purchase System

  • Ingo Siegert
  • Felix Schüssel
  • Miriam Schmidt
  • Stephan Reuter
  • Sascha Meudt
  • Georg Layher
  • Gerald Krell
  • Thilo Hörnle
  • Sebastian Handrich
  • Ayoub Al-Hamadi
  • Klaus Dietmayer
  • Heiko Neumann
  • Günther Palm
  • Friedhelm Schwenker
  • Andreas Wendemuth
Chapter
Part of the Cognitive Technologies book series (COGTECH)

Abstract

Using the scenario of purchasing a train ticket, we demonstrate a successful multimodal, dynamic human-computer interaction (HCI) in which the system adapts to the current situation and to the user’s state. This scenario illustrates that Companion-Systems face the challenge of analyzing and interpreting explicit and implicit observations obtained from sensors under changing environmental conditions. In a dedicated experimental setup, a wide range of sensors was used to capture the situative context and the user, comprising video and audio capturing devices, laser scanners, a touch screen, and a depth sensor. Explicit signals describe a user’s direct interaction with the system, such as interaction gestures, speech and touch input. Implicit signals are not directly addressed to the system; they comprise the user’s situative context, gestures, speech, body pose, facial expressions and prosody. Both the multimodally fused explicit signals and the information interpreted from implicit signals steer the application component, which was deliberately kept robust. The application offers stepwise dialogs that gather the most relevant information for purchasing a train ticket; the dialog steps are sensitive to the interpreted signals and data and can be adapted at processing time. We further highlight the system’s potential for a fast-track ticket purchase when several pieces of information indicate a hurried user.
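As an illustration only (not the authors' implementation), the adaptive dialog selection described in the abstract might be sketched as follows; all signal names and thresholds are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ImplicitCues:
    """Hypothetical interpreted implicit signals (names are illustrative)."""
    walking_speed: float   # m/s, e.g. from the laser scanners
    speech_rate: float     # syllables/s, e.g. from prosody analysis
    short_utterances: bool # terse commands instead of full sentences

def user_seems_hurried(cues: ImplicitCues,
                       speed_thresh: float = 1.6,
                       rate_thresh: float = 5.0) -> bool:
    """Combine several implicit cues; thresholds are made up for this sketch."""
    votes = [
        cues.walking_speed > speed_thresh,
        cues.speech_rate > rate_thresh,
        cues.short_utterances,
    ]
    # Require at least two supporting cues, since the abstract speaks of
    # "several pieces of information" indicating a hurried user.
    return sum(votes) >= 2

def choose_dialog(cues: ImplicitCues) -> str:
    """Pick the stepwise dialog flow or the fast-track purchase."""
    return "fast_track" if user_seems_hurried(cues) else "stepwise"
```

A simple majority vote is only one of many possible fusion strategies; the chapter's actual system fuses far richer signal streams.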

A video of the complete scenario (in German) is available at: http://www.uni-ulm.de/en/in/sfb-transregio-62/pr-and-press/videos.html

Acknowledgements

The authors thank the following colleagues for their invaluable support in realizing the scenario (in alphabetical order): Michael Glodek, Ralph Heinemann, Peter Kurzok, Sebastian Mechelke, Andreas Meinecke, Axel Panning, and Johannes Voss.

This work was done within the Transregional Collaborative Research Centre SFB/TRR 62 “Companion-Technology for Cognitive Technical Systems” funded by the German Research Foundation (DFG).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ingo Siegert (1)
  • Felix Schüssel (2)
  • Miriam Schmidt (2)
  • Stephan Reuter (2)
  • Sascha Meudt (2)
  • Georg Layher (2)
  • Gerald Krell (1)
  • Thilo Hörnle (2)
  • Sebastian Handrich (1)
  • Ayoub Al-Hamadi (1)
  • Klaus Dietmayer (2)
  • Heiko Neumann (2)
  • Günther Palm (2)
  • Friedhelm Schwenker (2)
  • Andreas Wendemuth (1)

  1. Otto von Guericke University Magdeburg, Magdeburg, Germany
  2. Ulm University, Ulm, Germany
