Effects of Gesture on the Perception of Psychological Anthropomorphism: A Case Study with a Humanoid Robot

  • Maha Salem
  • Friederike Eyssel
  • Katharina Rohlfing
  • Stefan Kopp
  • Frank Joublin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7072)


Previous work has shown that gestural behaviors affect anthropomorphic inferences about artificial communicators such as virtual agents. In an experiment with a humanoid robot, we investigated to what extent gesture would affect anthropomorphic inferences about the robot. Specifically, we examined the effects of the robot's hand and arm gestures on the attribution of typically human traits, likability of the robot, shared reality, and future contact intentions after interacting with the robot. For this, we manipulated the non-verbal behaviors of the humanoid robot in three experimental conditions: (1) no gesture, (2) congruent gesture, and (3) incongruent gesture. We hypothesized higher ratings on all dependent measures in the two gesture (vs. no gesture) conditions. The results confirm our predictions: when the robot used gestures during interaction, it was anthropomorphized more, participants perceived it as more likable, reported greater shared reality with it, and expressed stronger intentions for future contact than when the robot gave instructions without using gestures. Surprisingly, this effect was particularly pronounced when the robot's gestures were partly incongruent with its speech. These findings show that communicative non-verbal behaviors in robotic systems affect both anthropomorphic perceptions and the mental models humans form of a humanoid robot during interaction.


Keywords: Multimodal Interaction and Conversational Skills; Non-verbal Cues and Expressiveness; Anthropomorphism





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Maha Salem (1)
  • Friederike Eyssel (2)
  • Katharina Rohlfing (2)
  • Stefan Kopp (2)
  • Frank Joublin (3)

  1. Research Institute for Cognition and Robotics, Bielefeld University, Germany
  2. Center of Excellence Cognitive Interaction Technology, Bielefeld University, Germany
  3. Honda Research Institute Europe, Offenbach, Germany
