Automatic FAPs Determination and Expressions Synthesis

  • Narendra Patel
  • Mukesh A. Zaveri
Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 166)

Abstract

This paper presents a novel method that automatically generates facial animation parameters (FAPs), as defined by the MPEG-4 standard, from a frontal face image. The proposed method extracts facial features such as the eyes, eyebrows, mouth and nose, and these 2D features are used to evaluate the facial definition parameters (FDPs) with the help of a generic 3D face model. FAPs are determined from the difference between the displacements of the FDPs in the expression-specific face model and in the neutral face model. These FAPs are then used to generate the six basic expressions for any person given a neutral face image. The novelty of our algorithm is that, when expressions are mapped onto another person, it also captures expression details such as wrinkles and creases. The resulting FAPs can also be used for expression recognition. We have tested and evaluated the proposed algorithm on a standard database, namely BU-3DFE.
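To make the displacement-difference idea concrete, the following is a minimal sketch (not the authors' implementation): an MPEG-4-style FAP value is obtained from the motion of one feature point between the expression model and the neutral model, normalised by a FAPU-like unit. The point names, coordinates, reference distance and 1/1024 scaling below follow the usual MPEG-4 convention but are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fap_value(p_neutral: np.ndarray, p_expression: np.ndarray,
              fapu_distance: float, axis: int = 1) -> float:
    """Displacement of one FDP along a single axis, expressed in FAPU
    (1/1024 of a reference facial distance such as mouth width)."""
    fapu = fapu_distance / 1024.0              # MPEG-4 FAPUs are 1/1024 fractions
    displacement = p_expression[axis] - p_neutral[axis]
    return displacement / fapu

# Example: vertical motion of the lower-lip midpoint (hypothetical coordinates).
neutral_lip = np.array([0.0, -32.0, 5.0])      # the FDP on the neutral model
smile_lip   = np.array([0.0, -36.5, 5.0])      # the same FDP on the expression model
mouth_width = 52.0                             # assumed MW0 reference distance

print(fap_value(neutral_lip, smile_lip, mouth_width))  # negative => lip moves down
```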

Keywords

Generic 3D model, expression texture, FAPs, FDPs, MPEG-4

Copyright information

© Springer-Verlag GmbH Berlin Heidelberg 2012

Authors and Affiliations

  • Narendra Patel (1, 2)
  • Mukesh A. Zaveri (1, 2)
  1. Department of Computer Engineering, BVM Engineering College, V.V. Nagar, India
  2. Department of Computer Engineering, SVNIT, Surat, India
