FE8R - A Universal Method for Face Expression Recognition

  • Majida Albakoor
  • Khalid Saeed
  • Mariusz Rybnik
  • Mohamad Dabash
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9842)

Abstract

This paper proposes a new method for the recognition of face expressions, called FE8R. We study six standard expressions: anger, disgust, fear, happiness, sadness, and surprise, along with two additional ones: cry and natural. For experimental evaluation, samples from the MUG Facial Expression Database and the color FERET Database were used, with the addition of the cry expression. The proposed method is based on the extraction of characteristic objects from images by a gradient transformation that depends on the coordinates of the minimum and maximum points of each object within the face area. The gradient direction is restricted to the range \([-15, +35]\) degrees. The essential objects are studied in two ways: the first incorporates slant tracking; the second is based on feature encoding with the BPCC algorithm and classification by Backpropagation Artificial Neural Networks. The achieved classification rates reach 95%. The second approach proves fast and yields satisfactory results compared with other approaches.
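To make the gradient-based extraction step more concrete, the following minimal Python sketch (assuming OpenCV and NumPy) selects pixels whose gradient direction falls within \([-15, +35]\) degrees and records the minimum and maximum coordinates of each resulting object. The Sobel operator, the magnitude threshold, the connected-components grouping, and the sample file name are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): keep pixels whose gradient
# direction lies within [-15, +35] degrees and group them into objects,
# recording each object's minimum and maximum coordinates.
import cv2
import numpy as np

def extract_candidate_objects(gray, angle_range=(-15.0, 35.0)):
    """Return (min_point, max_point) pairs of gradient-selected regions."""
    # Horizontal and vertical derivatives (Sobel operator assumed here).
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)

    magnitude = np.hypot(gx, gy)
    direction = np.degrees(np.arctan2(gy, gx))   # in (-180, 180]

    # Keep sufficiently strong responses whose direction lies in the range
    # (the 0.2 relative threshold is an assumption for this sketch).
    strong = magnitude > 0.2 * magnitude.max()
    in_range = (direction >= angle_range[0]) & (direction <= angle_range[1])
    mask = (strong & in_range).astype(np.uint8)

    # Group the selected pixels into connected components ("objects").
    n_labels, labels = cv2.connectedComponents(mask)
    objects = []
    for label in range(1, n_labels):
        ys, xs = np.nonzero(labels == label)
        # The min/max point coordinates characterise each object.
        objects.append(((xs.min(), ys.min()), (xs.max(), ys.max())))
    return objects

if __name__ == "__main__":
    # "face.png" is a hypothetical grayscale face image.
    face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
    print(extract_candidate_objects(face)[:5])
```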

Keywords

Face expression · Feature extraction · Feature encoding · Slant tracking · Artificial Neural Networks

Notes

Acknowledgments

This work was supported by grant number S/WI/1/2013 from Bialystok University of Technology, funded from the resources for research of the Ministry of Science and Higher Education. The work was also partially supported by NeiTec.

References

  1. Gu, H., Su, G., Du, C.: Feature points extraction from face. In: Proceedings of the Conference on Image and Vision Computing (2003)
  2. Zeng, Z., Pantic, M., Roisman, G.I., Huang, T.S.: A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans. Pattern Anal. Mach. Intell. 31, 39–58 (2009)
  3. Hedaoo, S.V., Katkar, M.D., Khandait, S.P.: Feature tracking and expression recognition of face using dynamic Bayesian network. Int. J. Eng. Trends Technol. (IJETT) 8(10), 517–521 (2014)
  4. Gao, J., Fan, L., Li-zhong, X.: Median null(\(s_w\))-based method for face feature recognition. Appl. Math. Comput. 219(12), 6410–6419 (2013)
  5. Cui, Y., Fan, L.: Feature extraction using fuzzy maximum margin criterion. Neurocomputing 86, 52–58 (2012)
  6. Gordon, G.: Face recognition based on depth maps and surface curvature. In: SPIE Geometric Methods in Computer Vision, pp. 234–247 (1991)
  7. Saeed, K.: Object classification and recognition using Toeplitz matrices. In: Sołdek, J., Drobiazgiewicz, L. (eds.) Artificial Intelligence and Security in Computing Systems. The Kluwer International Series in Engineering and Computer Science, vol. 752, pp. 163–172. Kluwer Academic Publishers, Massachusetts (2003)
  8. Saeed, K., Albakoor, M.: Region growing based segmentation algorithm for typewritten and handwritten text recognition. Appl. Soft Comput. 9(2), 608–617 (2009)
  9. Aifanti, N., Papachristou, C., Delopoulos, A.: The MUG facial expression database. In: Proceedings of the 11th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), Desenzano, Italy, April 2010
  10. Phillips, P.J., Moon, H., Rauss, P.J., Rizvi, S.: The FERET evaluation methodology for face recognition algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1090–1104 (2000)
  11. Pantic, M.: Facial expression recognition. In: Li, S.Z., Jain, A. (eds.) Encyclopedia of Biometrics, pp. 400–406. Springer, Heidelberg (2009)
  12. Keltner, D., Ekman, P.: Facial expression of emotion. In: Lewis, M., Haviland-Jones, J.M. (eds.) Handbook of Emotions, pp. 236–249. Guilford Press, New York (2000)
  13. Chen, Y., Zhang, S., Zhao, X.: Facial expression recognition via non-negative least-squares sparse coding. Information 5, 305–331 (2014)
  14. Lin, K., Cheng, W., Li, J.: Facial expression recognition based on geometric features and geodesic distance. Int. J. Sig. Process. 7(1), 323–330 (2014)
  15. Kumbhar, M., Jadhav, A., Patil, M.: Facial expression recognition based on image feature. Int. J. Comput. Commun. Eng. 1(2), 117–119 (2012)
  16. Brunelli, R., Poggio, T.: Face recognition: features versus templates. IEEE Trans. Pattern Anal. Mach. Intell. 15(10), 1042–1052 (1993)
  17. Youssif, A., Asker, W.A.A.: Automatic facial expression recognition system based on geometric and appearance features. Comput. Inf. Sci. 4(2), 115 (2011)
  18. Bashyal, S., Venayagamoorthy, G.K.: Recognition of facial expressions using Gabor wavelets and learning vector quantization. J. Eng. Appl. Artif. Intell. 21, 1056–1064 (2008)
  19. Kumbhar, M., Patil, M., Jadhav, A.: Facial expression recognition using Gabor wavelet. Int. J. Comput. Appl. 68(23), 0975–8887 (2013)
  20. NabiZadeh, N., John, N.: Automatic facial expression recognition using modified wavelet-based salient points and Gabor-wavelet filters. In: Stephanidis, C. (ed.) HCII 2013, Part I. CCIS, vol. 373, pp. 362–366. Springer, Heidelberg (2013)
  21. Guo, G., Dyer, C.R.: Learning from examples in the small sample case: face expression recognition. IEEE Trans. Syst. Man Cybern. Part B Cybern. 35(3), 477–488 (2005)
  22. Gomathi, V., Ramar, K., Jeevakumar, A.S.: Human facial expression recognition using MANFIS model. Int. J. Electr. Electron. Eng. 3(6), 335–339 (2009)
  23. Gomathi, V., Ramar, K., Jeevakumar, A.S.: A neuro fuzzy approach for facial expression recognition using LBP histograms. J. Comput. Theory Eng. 2(3), 245–249 (2010)
  24. Khandait, S.P., Thool, R.C., Khandait, P.D.: Comparative analysis of ANFIS and NN approach for expression recognition using geometry method. J. Adv. Res. Comput. Sci. Softw. Eng. 2(3), 169–174 (2012)
  25. Albakoor, M., Albakkar, A.A., Dabsh, M., Sukkar, F.: BPCC approach for Arabic letters recognition. In: Arabnia, H.R. (ed.) IPCV, pp. 304–308. CSREA Press (2006)
  26. Saeed, K., Tabedzki, M., Rybnik, M., Adamski, M.: K3M: a universal algorithm for image skeletonization and a review of thinning techniques. Int. J. Appl. Math. Comput. Sci. 20(2), 317–335 (2010)
  27. Mancas, M., Gosselin, B., Macq, B.: Segmentation using a region growing thresholding. In: Proceedings of the SPIE, vol. 5672, pp. 388–398 (2005)
  28. Tremeau, A., Borel, N.: A region growing and merging algorithm to color segmentation. Pattern Recogn. 30(7), 1191–1203 (1997)
  29. Gottesfeld Brown, L.: A survey of image registration techniques. ACM Comput. Surv. 24, 325–376 (1992)
  30. Saeed, K., Albakoor, M.: A new feature extraction method for TMNN-based Arabic character classification. Comput. Inform. 26(4), 403–420 (2007)
  31. Delac, K., Grgic, M.: Face Recognition. I-Tech Education and Publishing, Vienna (2007)
  32. Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)
  33. Hess, M., Martinez, M.: Facial feature extraction based on the smallest univalue segment assimilating nucleus (SUSAN) algorithm. In: Proceedings of the Picture Coding Symposium (2004)
  34. Barber, C.B., Dobkin, D.P., Huhdanpaa, H.: The quickhull algorithm for convex hulls. ACM Trans. Math. Softw. 22(4), 469–483 (1996)

Copyright information

© IFIP International Federation for Information Processing 2016

Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 International License (http://creativecommons.org/licenses/by-nc/2.5/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  • Majida Albakoor (1)
  • Khalid Saeed (2)
  • Mariusz Rybnik (3)
  • Mohamad Dabash (4)

  1. Artificial Intelligence Department, Faculty of Information Engineering, Damascus University, Damascus, Syria
  2. Bialystok University of Technology, Bialystok, Poland
  3. University of Bialystok, Bialystok, Poland
  4. Mathematics Department, Faculty of Science, Aleppo University, Aleppo, Syria