NovA: Automated Analysis of Nonverbal Signals in Social Interactions

  • Tobias Baur
  • Ionut Damian
  • Florian Lingenfelser
  • Johannes Wagner
  • Elisabeth André
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8212)

Abstract

Previous studies have shown that the success of interpersonal interaction depends not only on the content we communicate explicitly, but also on the social signals we convey implicitly. In this paper, we present NovA (NOnVerbal behavior Analyzer), a system that automatically analyzes and facilitates the interpretation of social signals conveyed by gestures, facial expressions and other nonverbal cues as a basis for computer-enhanced social coaching. NovA records data of human interactions, automatically detects relevant behavioral cues as a measure of the quality of an interaction, and creates descriptive statistics for the recorded data. This enables us to give users automatically generated online feedback on the strengths and weaknesses of their social behavior, as well as elaborate tools for offline analysis and annotation.
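
To make the described workflow concrete, the following is a minimal, hypothetical sketch (in Python, not the authors' implementation) of how detected behavioral cue events could be aggregated into descriptive statistics and mapped to simple strength/weakness feedback. All cue labels, thresholds, and names (CueEvent, describe, feedback) are illustrative assumptions rather than part of NovA.

```python
# Hypothetical sketch, not the authors' implementation: aggregating detected
# behavioral cue events into descriptive statistics and rule-based feedback.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class CueEvent:
    name: str     # assumed cue label, e.g. "smile", "gesture", "closed_posture"
    start: float  # seconds from session start
    end: float    # seconds from session start


def describe(events, session_length):
    """Per-cue count, total duration, and session fraction (descriptive statistics)."""
    stats = defaultdict(lambda: {"count": 0, "duration": 0.0})
    for e in events:
        stats[e.name]["count"] += 1
        stats[e.name]["duration"] += e.end - e.start
    for s in stats.values():
        s["fraction"] = s["duration"] / session_length
    return dict(stats)


def feedback(stats):
    """Toy rule-based feedback; the thresholds are purely illustrative."""
    messages = []
    if stats.get("smile", {}).get("count", 0) < 3:
        messages.append("Weakness: few smiles were detected.")
    if stats.get("closed_posture", {}).get("fraction", 0.0) > 0.5:
        messages.append("Weakness: posture was closed for most of the session.")
    if stats.get("gesture", {}).get("count", 0) >= 5:
        messages.append("Strength: lively use of gestures.")
    return messages


if __name__ == "__main__":
    # In a real system such events would come from online recognizers,
    # not from a hard-coded list.
    events = [CueEvent("smile", 2.0, 3.5),
              CueEvent("gesture", 10.0, 12.0),
              CueEvent("closed_posture", 20.0, 80.0)]
    for message in feedback(describe(events, session_length=120.0)):
        print(message)
```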

Keywords

Facial Expression · Bayesian Network · Emotion Recognition · Multimodal Interface · Voice Activity Detection

Copyright information

© Springer International Publishing Switzerland 2013

Authors and Affiliations

  • Tobias Baur (1)
  • Ionut Damian (1)
  • Florian Lingenfelser (1)
  • Johannes Wagner (1)
  • Elisabeth André (1)

  1. Human Centered Multimedia, Augsburg University, Augsburg, Germany
