Ekfrasis: A Formal Language for Representing and Generating Sequences of Facial Patterns for Studying Emotional Behavior

  • Nikolaos Bourbakis
  • Anna Esposito
  • Despina Kavraki
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5042)

Abstract

Emotion has received much attention in recent years, in contexts ranging from speech synthesis and image understanding to automatic speech recognition, interactive dialogue systems, and wearable computing. This paper presents a formal model of a language (called Ekfrasis) as a software methodology that automatically synthesizes (or generates) various facial expressions by appropriately combining facial features. The main objective is to use this methodology to generate various combinations of facial expressions and to study whether these combinations efficiently represent emotional behavioral patterns.
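
To make the generation idea concrete, here is a minimal sketch of one way such a combinatorial synthesis could look, assuming a toy formalization in which each facial feature is a terminal symbol with a small set of discrete states and an expression is one combination of feature states. The feature names, state sets, and function below are purely hypothetical illustrations, not the paper's actual Ekfrasis grammar:

    # Toy sketch (hypothetical, not the authors' Ekfrasis formalism):
    # each facial feature is a terminal with discrete states; a facial
    # expression is one combination of feature states ("word" of the
    # language). The generator enumerates all candidate combinations.
    from itertools import product

    # Hypothetical feature alphabet and state sets.
    FEATURES = {
        "eyebrows": ["neutral", "raised", "lowered"],
        "eyes":     ["open", "wide", "narrowed"],
        "mouth":    ["closed", "open", "smile", "frown"],
    }

    def generate_expressions(features):
        """Yield every combination of feature states as a candidate
        facial-expression pattern."""
        names = sorted(features)
        for states in product(*(features[n] for n in names)):
            yield dict(zip(names, states))

    if __name__ == "__main__":
        patterns = list(generate_expressions(FEATURES))
        print(len(patterns), "candidate expressions")  # 3 * 3 * 4 = 36
        print(patterns[0])

Exhaustive enumeration of this kind produces the candidate pool; under the methodology described in the abstract, each generated combination would then be examined for whether it plausibly represents an emotional behavioral pattern.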

Keywords

Model of emotional expressions · formal language · facial features

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Nikolaos Bourbakis (1)
  • Anna Esposito (3)
  • Despina Kavraki (2)
  1. Wright State University, USA
  2. AIIS Inc., USA
  3. Department of Psychology and IIASS, Second University of Naples, Italy
