FE8R - A Universal Method for Face Expression Recognition
This paper proposes a new method for facial expression recognition, called FE8R. We studied the six standard expressions: anger, disgust, fear, happiness, sadness, and surprise, plus two additional ones: cry and natural. For experimental evaluation, samples were taken from the MUG Facial Expression Database and the color FERET Database, with the addition of the cry expression. The proposed method is based on extracting characteristic objects from images by a gradient transformation that depends on the coordinates of the minimum and maximum points of each object in the face area. The gradient direction is restricted to the range \([-15,+35]\) degrees. The essential objects are studied in two ways: the first incorporates slant tracking, while the second is based on feature encoding with the BPCC algorithm and classification by backpropagation artificial neural networks. The achieved classification rate reached 95%. The second method proved to be fast and produced satisfactory results compared to other approaches.
Keywords: Face expression · Feature extraction · Feature encoding · Slant tracking · Artificial Neural Networks
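To make the gradient step concrete, below is a minimal NumPy sketch of how gradient directions can be restricted to the \([-15,+35]\) degree range and how the minimum and maximum coordinates of the surviving object pixels can then be read off. The helper names (`gradient_direction_mask`, `object_extrema`), the finite-difference gradient operator, and the mean-magnitude threshold are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gradient_direction_mask(image, low_deg=-15.0, high_deg=35.0):
    """Keep pixels whose gradient direction lies in [low_deg, high_deg].

    Assumption: simple finite-difference gradients stand in for the
    paper's gradient transformation, whose exact operators are not
    specified here.
    """
    gy, gx = np.gradient(image.astype(float))      # row- and column-wise derivatives
    magnitude = np.hypot(gx, gy)                   # gradient strength
    direction = np.degrees(np.arctan2(gy, gx))     # direction in (-180, 180] degrees
    in_range = (direction >= low_deg) & (direction <= high_deg)
    return in_range & (magnitude > magnitude.mean())  # drop weak responses

def object_extrema(mask):
    """Return the minimum and maximum point coordinates of the masked object."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (ys.min(), xs.min()), (ys.max(), xs.max())

# Toy usage on a synthetic 64x64 "face area" patch.
patch = np.random.rand(64, 64)
extrema = object_extrema(gradient_direction_mask(patch))
print(extrema)  # ((min_row, min_col), (max_row, max_col)) or None
```

In the paper itself, the extracted extrema feed either the slant-tracking analysis or the BPCC encoding and neural-network classification stage; this sketch covers only the shared extraction step.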
This work was supported by grant number S/WI/1/2013 from the Bialystok University of Technology, funded from the resources for research of the Ministry of Science and Higher Education. The work was also partially supported by NeiTec.
- 1. Gu, H., Su, G., Du, C.: Feature points extraction from face. In: Proceedings of the Conference on Image and Vision Computing (2003)
- 6. Gordon, G.: Face recognition based on depth maps and surface curvature. In: SPIE Geometric Methods in Computer Vision, pp. 234–247 (1991)
- 7. Saeed, K.: Object classification and recognition using Toeplitz matrices. In: Sołdek, J., Drobiazgiewicz, L. (eds.) Artificial Intelligence and Security in Computing Systems. The Kluwer International Series in Engineering and Computer Science, vol. 752, pp. 163–172. Kluwer Academic Publishers, Massachusetts (2003)
- 9. Aifanti, N., Papachristou, C., Delopoulos, A.: The MUG facial expression database. In: Proceedings of the 11th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), Desenzano, Italy, April 2010
- 11. Pantic, M.: Facial expression recognition. In: Li, S.Z., Jain, A. (eds.) Encyclopedia of Biometrics, pp. 400–406. Springer, Heidelberg (2009)
- 12. Keltner, D., Ekman, P.: Facial expression of emotion. In: Lewis, M., Haviland-Jones, J.M. (eds.) Handbook of Emotions, pp. 236–249. Guilford Press, New York (2000)
- 14. Lin, K., Cheng, W., Li, J.: Facial expression recognition based on geometric features and geodesic distance. Int. J. Sig. Process. 7(1), 323–330 (2014)
- 17. Youssif, A., Asker, W.A.A.: Automatic facial expression recognition system based on geometric and appearance features. Comput. Inf. Sci. 4(2), 115 (2011). Canadian Center of Science and Education
- 19. Kumbhar, M., Patil, M., Jadhav, A.: Facial expression recognition using Gabor wavelet. Int. J. Comput. Appl. 68(23) (2013)
- 22. Gomathi, V., Ramar, K., Jeevakumar, A.S.: Human facial expression recognition using MANFIS model. Int. J. Electr. Electron. Eng. 3(6), 335–339 (2009)
- 24. Khandait, S.P., Thool, R.C., Khandait, P.D.: Comparative analysis of ANFIS and NN approach for expression recognition using geometry method. J. Adv. Res. Comput. Sci. Softw. Eng. 2(3), 169–174 (2012)
- 25. Albakoor, M., Albakkar, A.A., Dabsh, M., Sukkar, F.: BPCC approach for Arabic letters recognition. In: Arabnia, H.R. (ed.) IPCV, pp. 304–308. CSREA Press (2006)
- 27. Mancas, M., Gosselin, B., Macq, B.: Segmentation using a region growing thresholding. In: Proceedings of SPIE, vol. 5672, pp. 388–398 (2005)
- 33. Hess, M., Martinez, M.: Facial feature extraction based on the smallest univalue segment assimilating nucleus (SUSAN) algorithm. In: Proceedings of the Picture Coding Symposium (2004)
Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 2.5 License (http://creativecommons.org/licenses/by-nc/2.5/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.