The Visual Computer, Volume 26, Issue 6–8, pp 505–519

From sentence to emotion: a real-time three-dimensional graphics metaphor of emotions extracted from text

  • Stephane Gobron
  • Junghyun Ahn
  • Georgios Paltoglou
  • Michael Thelwall
  • Daniel Thalmann

Original Article


This paper presents a novel concept: a graphical representation of human emotions extracted from text sentences. The major contributions of this paper are the following. First, we present a pipeline that extracts, processes, and renders the emotions of a 3D virtual human (VH). The emotion extraction is based on data-mining statistics gathered from large cyberspace databases. Second, we propose methods to optimize this computational pipeline so that real-time virtual-reality rendering can be achieved on common PCs. Third, we use the Poisson distribution to transform lexical and language parameters extracted from the databases into coherent intensities of valence and arousal, the parameters of Russell's circumplex model of emotion. The last contribution is a practical color interpretation of emotion that influences the emotional appearance of rendered VHs. To test our method's efficiency, we provide computational statistics for both classical and atypical cases of emotion. To evaluate our approach, we applied the method to diverse areas such as cyberspace forums, comics, and theater dialogs.
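The abstract does not specify the exact form of the Poisson-based transfer or the emotion-to-color mapping, so the following is only a minimal illustrative sketch of the two ingredients it names: a Poisson probability mass function over discrete lexical counts, and a hypothetical mapping from a (valence, arousal) point of Russell's circumplex to an RGB color. Both function names and the color scheme are assumptions for illustration, not the authors' actual method.

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(K = k) for a Poisson distribution with rate lam.

    In the paper's setting, such a distribution could weight discrete
    lexical counts into graded emotional intensities.
    """
    return math.exp(-lam) * lam ** k / math.factorial(k)

def emotion_to_color(valence: float, arousal: float) -> tuple:
    """Map (valence, arousal) in [-1, 1]^2 to an RGB triple in [0, 1].

    Hypothetical scheme: negative valence shifts toward red, positive
    valence toward green, neutrality toward blue; arousal brightens
    the resulting color.
    """
    intensity = 0.5 + 0.5 * arousal          # arousal controls brightness
    r = intensity * max(0.0, -valence)       # negative valence -> red
    g = intensity * max(0.0, valence)        # positive valence -> green
    b = intensity * (1.0 - abs(valence))     # neutral valence -> blue
    return (r, g, b)
```

For example, a highly aroused, strongly positive sentence maps to a bright green, while a calm, strongly negative one maps to a dim red; any real system would of course calibrate such a mapping against psychological color-association data, as the paper proposes.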


Keywords: Virtual reality, Distribution functions, Data mining, Text analysis, Psychology and sociology, Facial animation





Copyright information

© Springer-Verlag 2010

Authors and Affiliations

  • Stephane Gobron (1, corresponding author)
  • Junghyun Ahn (1)
  • Georgios Paltoglou (1)
  • Michael Thelwall (1)
  • Daniel Thalmann (1)

  1. EPFL, IC ISIM VRLAB, Lausanne, Switzerland
