
Towards Meaningful Robot Gesture

  • Maha Salem
  • Stefan Kopp
  • Ipke Wachsmuth
  • Frank Joublin
Part of the Cognitive Systems Monographs book series (COSMOS, volume 6)

Abstract

Humanoid robot companions intended to engage in natural and fluent human-robot interaction are expected to combine speech with non-verbal modalities to produce comprehensible and believable behavior. We present an approach that enables the humanoid robot ASIMO to flexibly produce and synchronize speech and co-verbal gestures at run-time, without being limited to a predefined repertoire of motor actions. Since this research challenge has already been tackled in various ways within the domain of virtual conversational agents, we build upon the experience gained from developing the speech and gesture production model used for our virtual human Max. The Articulated Communicator Engine (ACE), one of the most sophisticated multi-modal schedulers, replaces lexicons of canned behaviors with on-the-spot production of flexibly planned behavior representations. We explain how ACE, serving as the underlying action generation architecture, draws upon a tight, bi-directional coupling of ASIMO's perceptuo-motor system with multi-modal scheduling, realized via both efferent control signals and afferent feedback.
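
The closed-loop coupling described above can be illustrated with a minimal Python sketch. This is not the actual ACE or ASIMO interface; all names (MotorInterface, schedule_utterance, the eta feedback field, and the gesture labels) are hypothetical stand-ins showing how efferent commands and afferent feedback might interleave speech output with gesture strokes.

    # Hypothetical sketch of ACE-style closed-loop gesture/speech scheduling.
    # All names below are illustrative assumptions, not the ACE or ASIMO API.
    import time

    class MotorInterface:
        """Stand-in for the robot's perceptuo-motor system."""

        def send_command(self, trajectory):
            # Efferent path: dispatch a planned gesture trajectory.
            print(f"executing gesture phase: {trajectory}")

        def read_feedback(self):
            # Afferent path: report the estimated time until the running
            # motor action reaches its stroke (stubbed with a constant).
            return {"eta": 0.4}

    def schedule_utterance(chunks, motor):
        """Interleave speech and gesture chunk by chunk, re-timing each
        speech onset against afferent feedback so the gesture stroke
        co-occurs with its affiliated words."""
        for speech, gesture in chunks:
            motor.send_command(gesture)          # efferent control signal
            eta = motor.read_feedback()["eta"]   # afferent feedback
            time.sleep(eta)                      # delay speech until the
                                                 # stroke is about to peak
            print(f"speaking: {speech!r}")

    if __name__ == "__main__":
        utterance = [("This box", "raise_right_arm"),
                     ("was this big", "open_hands_apart")]
        schedule_utterance(utterance, MotorInterface())

The point of the sketch is the feedback read between dispatch and speech onset: rather than executing a fixed timeline, the scheduler re-times each chunk against what the motor system reports, which is what allows run-time adaptation to the robot's physical constraints.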

Keywords

Humanoid Robot · Virtual Agent · Conversational Agent · Whole Body Motion · Embodied Conversational Agent

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Maha Salem (1)
  • Stefan Kopp (2)
  • Ipke Wachsmuth (3)
  • Frank Joublin (4)

  1. Research Institute for Cognition and Robotics, Bielefeld University, Germany
  2. Sociable Agents Group, Bielefeld University, Germany
  3. Artificial Intelligence Group, Bielefeld University, Germany
  4. Honda Research Institute Europe, Offenbach, Germany
