Expression sequences generator for synthetic emotion

  • Original Paper
  • Journal on Multimodal User Interfaces

Abstract

In this paper, we present a novel technique for synthesizing emotion expressions in response to an unexpected external stimulus. We first construct a practical dataset of body and hand movements for each basic emotion, containing basic animation clips of the human body, and then define the temporal and spatial constraints of these clips. Each clip is labelled with its emotion category and intensity, body part, and duration. Given this dataset, the system searches for clips related to the current emotion state and blends them into multimodal expression sequences. Our experiments show that these multimodal sequential expressions produce realistic, responsive emotional behaviour.
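The retrieval-and-blending pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the clip schema (emotion label, intensity, body part, duration) follows the abstract's description, but every name, the intensity-tolerance search, and the crossfade-based temporal constraint are assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    name: str
    emotion: str      # basic emotion category, e.g. "fear", "joy"
    intensity: float  # emotion intensity in [0, 1]
    body_part: str    # "body" or "hand"
    duration: float   # clip length in seconds

def retrieve(dataset, emotion, intensity, tol=0.3):
    """Search the dataset for clips matching the current emotion state
    within an intensity tolerance (hypothetical matching rule)."""
    return [c for c in dataset
            if c.emotion == emotion and abs(c.intensity - intensity) <= tol]

def blend(clips, crossfade=0.2):
    """Chain one clip per body part into a timed sequence; overlapping
    crossfades stand in for the paper's temporal constraints."""
    sequence, t = [], 0.0
    for part in ("body", "hand"):
        candidates = [c for c in clips if c.body_part == part]
        if not candidates:
            continue
        # pick the most intense matching clip for this body part
        clip = max(candidates, key=lambda c: c.intensity)
        start = max(0.0, t - crossfade)  # overlap with the previous clip
        sequence.append((clip.name, start, start + clip.duration))
        t = start + clip.duration
    return sequence

dataset = [
    Clip("startle_body", "fear", 0.8, "body", 1.2),
    Clip("cover_face", "fear", 0.7, "hand", 0.8),
    Clip("wave", "joy", 0.5, "hand", 1.0),
]
print(blend(retrieve(dataset, "fear", 0.75)))
```

For a "fear" stimulus of intensity 0.75, both fear clips match and are chained with a 0.2 s crossfade, so the hand clip starts shortly before the body clip ends.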



Author information

Correspondence to Zhigeng Pan.


About this article

Cite this article

Zhang, M., Zhou, X., Xiang, N. et al. Expression sequences generator for synthetic emotion. J Multimodal User Interfaces 5, 19–25 (2012). https://doi.org/10.1007/s12193-011-0072-6
