Abstract
In this paper, we present a novel technique for synthesizing emotion expressions in response to an unexpected external stimulus. We first construct a practical dataset of body and hand movements for each basic emotion, containing basic animation clips of the human body, and then define the temporal and spatial constraints of these clips. Each clip is labelled with its emotion category and intensity, body part, and duration. Based on this dataset, the system searches for clips related to the current emotion state and blends them into multimodal expression sequences. Our experiments show that multimodal sequential expressions can generate realistic responsive emotion results.
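The abstract describes clips labelled with emotion category, intensity, body part, and duration, from which the system selects and blends matching clips. A minimal sketch of that selection step is given below; the `Clip` record, the tolerance parameter `tol`, and the one-clip-per-body-part policy are illustrative assumptions, not the paper's actual method.

```python
from dataclasses import dataclass

# Hypothetical clip record mirroring the labels named in the abstract:
# emotion category, intensity, body part, and duration (all names illustrative).
@dataclass
class Clip:
    name: str
    emotion: str       # e.g. "anger", "surprise"
    intensity: float   # normalized to 0.0 .. 1.0
    body_part: str     # e.g. "arm", "hand", "torso"
    duration: float    # seconds

def select_clips(dataset, emotion, intensity, tol=0.2):
    """Pick, for each body part, the clip of the requested emotion whose
    intensity is closest to the current emotion state (within `tol`)."""
    chosen = {}
    for clip in dataset:
        if clip.emotion != emotion:
            continue
        err = abs(clip.intensity - intensity)
        if err <= tol:
            best = chosen.get(clip.body_part)
            if best is None or err < abs(best.intensity - intensity):
                chosen[clip.body_part] = clip
    return list(chosen.values())

def sequence_duration(clips):
    """Clips for different body parts play in parallel, so the blended
    multimodal sequence lasts as long as the longest selected clip."""
    return max((c.duration for c in clips), default=0.0)

dataset = [
    Clip("arm_raise", "anger", 0.8, "arm", 1.2),
    Clip("fist_clench", "anger", 0.9, "hand", 0.8),
    Clip("lean_back", "surprise", 0.7, "torso", 1.0),
]
clips = select_clips(dataset, "anger", 0.85)
```

The actual system would additionally enforce the temporal and spatial constraints defined over the clips before blending; that step is omitted here.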
Cite this article
Zhang, M., Zhou, X., Xiang, N. et al. Expression sequences generator for synthetic emotion. J Multimodal User Interfaces 5, 19–25 (2012). https://doi.org/10.1007/s12193-011-0072-6