International Journal of Social Robotics, Volume 4, Issue 2, pp 201–217

Generation and Evaluation of Communicative Robot Gesture

  • Maha Salem
  • Stefan Kopp
  • Ipke Wachsmuth
  • Katharina Rohlfing
  • Frank Joublin

Abstract

How is communicative gesture behavior in robots perceived by humans? Although gesture is crucial in social interaction, this research question is still largely unexplored in the field of social robotics. The main objective of the present work is therefore to investigate how gestural machine behaviors can be used to design more natural communication in social robots. The chosen approach is twofold. Firstly, we tackle the technical challenges encountered when implementing a speech-gesture generation model on a robotic platform. We present a framework that enables a humanoid robot to flexibly produce synthetic speech and co-verbal hand and arm gestures at run-time, without being limited to a predefined repertoire of motor actions. Secondly, we exploit this flexibility in robot gesture in controlled experiments. To gain a deeper understanding of how communicative robot gesture might impact and shape human perception and evaluation of human-robot interaction, we conducted a between-subjects experimental study using the humanoid robot in a joint task scenario. We manipulated the robot's non-verbal behaviors across three experimental conditions, such that it referred to objects using either (1) unimodal utterances (i.e., speech only), (2) congruent multimodal utterances (i.e., semantically matching speech and gesture), or (3) incongruent multimodal utterances (i.e., semantically non-matching speech and gesture). Our findings reveal that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech, even if they do not semantically match the spoken utterance.
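To make the experimental manipulation concrete, the following minimal Python sketch illustrates how the three utterance conditions could be assembled. It is an illustrative assumption rather than the authors' implementation: the names MultimodalUtterance, GESTURE_LEXICON, and build_utterance are hypothetical, and the gesture labels stand in for whatever motor actions the framework generates at run-time.

    # Illustrative sketch only -- not the authors' implementation.
    # It shows how a speech-gesture framework might pair verbal and
    # non-verbal behavior for the three experimental conditions.
    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical gesture lexicon: maps a referent to a semantically
    # matching (congruent) gesture label.
    GESTURE_LEXICON = {
        "cup": "point_at_cup",
        "box": "outline_box_shape",
        "lid": "twist_open_motion",
    }

    @dataclass
    class MultimodalUtterance:
        speech: str                    # text to be synthesized
        gesture: Optional[str] = None  # gesture label, or None for speech only

    def build_utterance(referent: str, condition: str) -> MultimodalUtterance:
        """Assemble an utterance for 'unimodal', 'congruent', or 'incongruent'."""
        speech = f"Please put the {referent} on the table."
        if condition == "unimodal":
            return MultimodalUtterance(speech)  # (1) speech only
        if condition == "congruent":
            # (2) gesture semantically matches the spoken referent
            return MultimodalUtterance(speech, GESTURE_LEXICON[referent])
        if condition == "incongruent":
            # (3) deliberately pick a gesture belonging to a different referent
            mismatched = next(g for r, g in GESTURE_LEXICON.items() if r != referent)
            return MultimodalUtterance(speech, mismatched)
        raise ValueError(f"unknown condition: {condition}")

    for cond in ("unimodal", "congruent", "incongruent"):
        u = build_utterance("cup", cond)
        print(f"{cond:12s} speech={u.speech!r} gesture={u.gesture}")

Note that this sketch omits a central difficulty the paper addresses: in the actual framework, speech and gesture must additionally be synchronized at run-time, e.g., by aligning the gesture stroke with the affiliated words of the spoken utterance.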

Keywords

Multimodal interaction and conversational skills · Non-verbal cues and expressiveness · Social human-robot interaction · Robot companions and social robots

Copyright information

© Springer Science & Business Media BV 2011

Authors and Affiliations

  • Maha Salem (1)
  • Stefan Kopp (2)
  • Ipke Wachsmuth (3)
  • Katharina Rohlfing (4)
  • Frank Joublin (5)
  1. Research Institute for Cognition and Robotics, Bielefeld, Germany
  2. Sociable Agents Group, Bielefeld University, Bielefeld, Germany
  3. Artificial Intelligence Group, Bielefeld University, Bielefeld, Germany
  4. Emergentist Semantics Group, Bielefeld University, Bielefeld, Germany
  5. Honda Research Institute Europe, Offenbach, Germany
