
Virtual Reality, Volume 17, Issue 3, pp 219–237

An automatic method for motion capture-based exaggeration of facial expressions with personality types

  • Seongah Chin
  • Chung Yeon Lee
  • Jaedong Lee
Original Article

Abstract

Facial expressions have long attracted attention as a form of nonverbal communication. In visual applications such as movies, games, and animations, audiences tend to respond more to exaggerated expressions than to regular ones, since exaggeration conveys emotion more vividly. In this paper, we propose an automatic method for exaggerating facial expressions from motion-captured data according to personality type. The exaggerated expressions are generated by an exaggeration mapping (EM) that transforms captured facial motions into exaggerated motions. Because individuals do not share identical personalities, the mapping must account for each individual's personality type when exaggerating expressions. We employ the Myers–Briggs type indicator (MBTI), a widely used classification of personality types, to define a personality-type-based EM. Finally, we experimentally validate the EM and present simulations of the resulting facial expressions.
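
The sketch below illustrates the general idea of an exaggeration mapping as described in the abstract: motion-captured facial displacements are amplified away from a neutral pose by a factor tied to an MBTI personality type. It is a minimal illustration only; the gain table, marker layout, and function names are hypothetical and are not the coefficients or formulation used by the authors.

```python
# Minimal sketch of a personality-modulated exaggeration mapping (EM).
# Assumes facial motion is stored as per-marker displacements from a neutral
# pose; all numeric gains below are placeholders, not values from the paper.
import numpy as np

# Hypothetical exaggeration gains per MBTI preference letter.
MBTI_GAIN = {"E": 1.4, "I": 1.1, "S": 1.2, "N": 1.3,
             "T": 1.1, "F": 1.3, "J": 1.2, "P": 1.25}

def exaggeration_gain(mbti_type: str) -> float:
    """Combine the per-letter gains of a four-letter MBTI type (e.g. 'ENFJ')."""
    return float(np.mean([MBTI_GAIN[c] for c in mbti_type.upper()]))

def exaggerate(neutral: np.ndarray, frames: np.ndarray, mbti_type: str) -> np.ndarray:
    """Scale motion-capture displacements away from the neutral pose.

    neutral: (M, 3) marker positions of the neutral face.
    frames:  (T, M, 3) captured marker positions over T frames.
    Returns exaggerated frames of the same shape.
    """
    g = exaggeration_gain(mbti_type)
    displacements = frames - neutral           # per-frame facial motion
    return neutral + g * displacements         # amplified expression

# Example: exaggerate a toy captured sequence for an 'ENFJ' performer.
neutral = np.zeros((30, 3))                    # 30 markers (toy data)
frames = np.random.randn(120, 30, 3) * 0.01    # 120 frames of small motions
exaggerated = exaggerate(neutral, frames, "ENFJ")
```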

Keywords

Facial expressions · Exaggeration · Facial motion capture · Facial motion cloning · Personality · MBTI · Nonnegative matrix factorization

Acknowledgments

This research was partially supported by a Korea Research Foundation grant (KRF-521-D00398).


Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  1. Division of Multimedia, College of Engineering, Sungkyul University, Anyang, South Korea
  2. Biointelligence Laboratory, School of Computer Science and Engineering, Seoul National University, Seoul, South Korea
  3. DXP Lab., Department of Computer Science, College of Engineering, Korea University, Seoul, South Korea
