Head Movement Quantification and Its Role in Facial Expression Study

  • Fakhrul Hazman Yusoff
  • Rahmita Wirza O. K. Rahmat
  • Md. Nasir Sulaiman
  • Mohamed Hatta Shaharom
  • Hariyati Shahrima Abdul Majid
Chapter
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 52)

Abstract

Temporal modeling of facial expressions is of interest to several fields of study, such as expression recognition, realism in computer animation, and behavioral study in psychology. While much active research captures the movement of facial features and its temporal properties, work on head movement during the facial expression process is lacking. Omitting head movement leaves expression descriptions incomplete, especially for expressions that involve the head, such as disgust. Therefore, this paper proposes a method to track head movement using a dual pivot head tracking system (DPHT). To demonstrate its usefulness, the tracking system is applied to subjects depicting disgust. A simple two-tailed statistical analysis and a visual rendering comparison against a system that uses only a single pivot illustrate the practicality of DPHT. Results show that better depictions of expressions can be achieved when head movement is incorporated into facial expression study.
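To make the two ideas in the abstract concrete, the sketch below is a minimal illustration, not the authors' implementation: it contrasts rotating a head landmark about a single pivot with composing rotations about a lower (neck-base) pivot and an upper (skull) pivot, then runs a two-tailed independent-samples t-test comparing per-frame fitting errors of the two models. The pivot placement, pitch-only rotation, and all numbers are assumptions for illustration; the paper's DPHT parameters and data differ.

```python
import numpy as np
from scipy import stats

def rot_x(theta):
    """3x3 rotation matrix about the x-axis (head pitch), theta in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def single_pivot(point, pivot, theta):
    """Rotate a head landmark about one pivot (the single-pivot baseline)."""
    return pivot + rot_x(theta) @ (point - pivot)

def dual_pivot(point, lower_pivot, upper_pivot, theta_lower, theta_upper):
    """Rotate a landmark about the lower (neck-base) pivot, then about the
    upper (skull) pivot, which is itself carried along by the first rotation."""
    moved = single_pivot(point, lower_pivot, theta_lower)
    upper_moved = single_pivot(upper_pivot, lower_pivot, theta_lower)
    return upper_moved + rot_x(theta_upper) @ (moved - upper_moved)

# --- Synthetic comparison (placeholder numbers, not the paper's data) ---
rng = np.random.default_rng(0)
# Hypothetical per-frame landmark fitting errors (mm) for each model.
err_single = rng.normal(3.0, 0.8, size=30)
err_dual = rng.normal(2.2, 0.8, size=30)

# Two-tailed independent-samples t-test (scipy's default is two-tailed).
t_stat, p_value = stats.ttest_ind(err_single, err_dual)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
```

Under this toy setup, a small p-value would indicate that the dual-pivot model's fitting errors differ significantly from the single-pivot baseline's, which is the form of comparison the abstract describes.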

Keywords

Face expression modeling · Computer graphics · Face tracking · Face animation


Copyright information

© Springer Science+Business Media B.V. 2009

Authors and Affiliations

  • Fakhrul Hazman Yusoff (1)
  • Rahmita Wirza O. K. Rahmat (1)
  • Md. Nasir Sulaiman (1)
  • Mohamed Hatta Shaharom (1)
  • Hariyati Shahrima Abdul Majid (1)

  1. Universiti Teknologi MARA; Universiti Putra Malaysia; Cyberjaya University College of Medical Sciences; International Islamic University, Shah Alam, Malaysia
