Journal of Signal Processing Systems, Volume 75, Issue 2, pp 155–168

Automatic Facial Expression Exaggeration System with Parallelized Implementation on a Multi-Core Embedded Computing Platform

  • Te-Feng Su
  • Chih-Hsueh Duan
  • Shu-Fan Wang
  • Yu-Tzu Lee
  • Shang-Hong Lai

Abstract

In this paper, we propose an automatic facial expression exaggeration system, consisting of face detection, facial expression recognition, and facial expression exaggeration components, that generates exaggerated views of different expressions for an input face video. In addition, we develop parallelized algorithms for the system to reduce its execution time on a multi-core embedded platform. Experimental results demonstrate satisfactory expression exaggeration quality and computational efficiency in cluttered environments, and quantitative comparisons show that the proposed parallelization strategies provide significant speedup over a single-processor implementation on the same multi-core embedded platform.
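The abstract describes a three-stage per-frame pipeline (face detection, expression recognition, expression exaggeration) whose stages are parallelized across cores. As a loose illustrative sketch only, not the authors' implementation, the frame-level parallel mapping could be structured as below; every function body here is a hypothetical placeholder:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_face(frame):
    # Placeholder for the face-detection stage; a real system would
    # return the cropped face region found in the frame.
    return frame

def recognize_expression(face):
    # Placeholder for the expression-recognition stage; a real system
    # would return a label such as "happy" or "angry".
    return "neutral"

def exaggerate_expression(face, label):
    # Placeholder for the exaggeration stage, which would warp the
    # face according to the recognized expression.
    return face

def process_frame(frame):
    # The three pipeline stages applied to a single video frame.
    face = detect_face(frame)
    label = recognize_expression(face)
    return exaggerate_expression(face, label)

def process_video(frames, workers=4):
    # Frame-level data parallelism: independent frames are dispatched
    # to a worker pool, nominally one worker per core. The paper's
    # DSP-based embedded platform partitions work differently; the
    # thread pool here only illustrates the parallel mapping.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, frames))
```

Because each frame is processed independently, this mapping scales with the number of cores, which is the kind of speedup the abstract's quantitative comparison measures.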

Keywords

Face detection · Facial expression recognition · Facial expression exaggeration · Multi-core embedded system


Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Te-Feng Su (1)
  • Chih-Hsueh Duan (1)
  • Shu-Fan Wang (1)
  • Yu-Tzu Lee (2)
  • Shang-Hong Lai (1)

  1. Department of Computer Science, National Tsing Hua University, Hsinchu City, Taiwan
  2. Institute of Information Systems and Applications, National Tsing Hua University, Hsinchu City, Taiwan
