International Journal of Social Robotics, Volume 4, Issue 3, pp 249–262

Facial Communicative Signals: Valence Recognition in Task-Oriented Human-Robot Interaction
  • Christian Lang
  • Sven Wachsmuth
  • Marc Hanheide
  • Heiko Wersing

DOI: 10.1007/s12369-012-0145-z

Cite this article as:
Lang, C., Wachsmuth, S., Hanheide, M. et al. Int J of Soc Robotics (2012) 4: 249. doi:10.1007/s12369-012-0145-z


This paper investigates facial communicative signals (head gestures, eye gaze, and facial expressions) as nonverbal feedback in human-robot interaction. Motivated by a discussion of the literature, we argue for scenario-specific investigations due to the complex nature of these signals, and present an object-teaching scenario in which subjects teach the names of objects to a robot, which in turn is to name these objects correctly afterwards. The robot’s verbal answers are designed to elicit facial communicative signals from its interaction partners. We investigated the human ability to recognize this spontaneous facial feedback, as well as the performance of two automatic recognition approaches. The first is a static approach yielding baseline results, whereas the second considers the temporal dynamics of the signals and achieved classification rates comparable to human performance.


Keywords: Facial communicative signals · Valence recognition · Head gestures · Eye gaze · Facial expressions · Object teaching · Active appearance models

Copyright information

© Springer Science+Business Media BV 2012

Authors and Affiliations

  • Christian Lang (1)
  • Sven Wachsmuth (2)
  • Marc Hanheide (3)
  • Heiko Wersing (4)

  1. Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld University, Bielefeld, Germany
  2. Applied Informatics, Bielefeld University, Bielefeld, Germany
  3. School of Computer Science, University of Lincoln, Lincoln, UK
  4. Honda Research Institute Europe, Offenbach, Germany
