
From sentence to emotion: a real-time three-dimensional graphics metaphor of emotions extracted from text

Original Article · The Visual Computer

Abstract

This paper presents a novel concept: a graphical representation of human emotion extracted from text sentences. The major contributions of this paper are the following. First, we present a pipeline that extracts, processes, and renders the emotions of 3D virtual humans (VHs). The extraction of emotion is based on data-mining statistics computed over large cyberspace databases. Second, we propose methods to optimize this computational pipeline so that real-time virtual reality rendering can be achieved on common PCs. Third, we use the Poisson distribution to transform lexical and language parameters extracted from databases into coherent intensities of valence and arousal, the two parameters of Russell’s circumplex model of emotion. The last contribution is a practical color interpretation of emotion that influences the emotional appearance of the rendered VHs. To demonstrate the method’s efficiency, we provide computational statistics for both classical and atypical cases of emotion. To evaluate our approach, we applied it to diverse areas such as cyberspace forums, comics, and theater dialogs.
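The Poisson-based mapping described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the per-category expected counts (`lam_*`), and the use of the Poisson CDF to squash observed emotional-term counts into bounded intensities are all illustrative assumptions; the paper only states that the Poisson distribution transfers lexical parameters into coherent valence and arousal intensities.

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    # P(X = k) for a Poisson random variable with mean lam.
    return math.exp(-lam) * lam**k / math.factorial(k)

def poisson_cdf(k: int, lam: float) -> float:
    # P(X <= k): probability of observing at most k emotional terms
    # when lam terms are expected in a typical sentence.
    return sum(poisson_pmf(i, lam) for i in range(k + 1))

def term_intensity(count: int, expected: float) -> float:
    # Map an observed count of emotional terms to [0, 1]: counts well
    # above the corpus expectation saturate toward 1, rare counts stay low.
    return poisson_cdf(count, expected)

def sentence_emotion(pos_count: int, neg_count: int, arousal_count: int,
                     lam_pos: float = 1.0, lam_neg: float = 1.0,
                     lam_arousal: float = 1.5) -> tuple[float, float]:
    # Valence in [-1, 1]: positive-term intensity minus negative-term
    # intensity. Arousal in [0, 1]: intensity of high-activation terms.
    valence = (term_intensity(pos_count, lam_pos)
               - term_intensity(neg_count, lam_neg))
    arousal = term_intensity(arousal_count, lam_arousal)
    return valence, arousal
```

A sentence with several positive terms and no negative ones yields a clearly positive valence, while the mirrored counts yield the symmetric negative value, placing both on opposite sides of Russell’s circumplex.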



Author information

Correspondence to Stephane Gobron.


Cite this article

Gobron, S., Ahn, J., Paltoglou, G. et al. From sentence to emotion: a real-time three-dimensional graphics metaphor of emotions extracted from text. Vis Comput 26, 505–519 (2010). https://doi.org/10.1007/s00371-010-0446-x

