Pragmatic Multimodality: Effects of Nonverbal Cues of Focus and Certainty in a Virtual Human

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10498)


In pragmatic multimodality, modal (pragmatic) information is conveyed multimodally by cues in gesture, facial expressions, head movements, and prosody. We observed these cues in natural interaction data. They can convey positive and negative focus, in that they emphasise or de-emphasise a piece of information, and they can convey uncertainty. In this work, we test the effects of these cues on perception and recall in human users when the cues are produced by a virtual human. The nonverbal behaviour of the virtual human was modelled using motion capture data, ensuring a fully multimodal appearance. Results of the study show that the virtual human was perceived as highly competent and as saying something important. A special case of de-emphasising cues led to lower content recall.


Pragmatic modification · Utterance marking · Linguistic modals · Virtual human · Motion capture · Perception study





Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Social Cognitive Systems Group, Faculty of Technology, Center of Excellence "Cognitive Interaction Technology" (CITEC), Bielefeld University, Bielefeld, Germany
