
An efficient approach for facial action unit intensity detection using distance metric learning based on cosine similarity

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Human emotions are conveyed largely through facial expressions, and facial expressions, whether simple or complex, can be decoded in terms of facial action units. Any facial expression can therefore be detected and analyzed once its action units are decoded. In the presented work, facial action unit intensity is detected by mapping the features with a distance metric learned from their cosine similarity. Distance metric learning based on cosine similarity maps the data by learning a metric that measures orientation rather than magnitude; the motivation is that changes in facial expression are better represented by changes in orientation than by changes in magnitude. The mapped features are fed to a support vector machine to classify the intensity levels of the action units. Experimental results on two widely used databases, the DISFA database and the UNBC-McMaster shoulder pain database, confirm the efficacy of the proposed approach.
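As a rough illustration of the pipeline the abstract describes, the following Python sketch maps features onto the unit sphere, so that a linear kernel reduces to cosine similarity, and then trains a support vector machine on the mapped features to classify intensity levels. The feature matrix `X`, the labels `y`, and the plain L2 normalisation step are stand-ins chosen for this example; the paper's actual metric-learning objective is not reproduced here.

```python
# Minimal sketch: cosine-similarity-based feature mapping + SVM intensity
# classification. X and y below are synthetic stand-ins, not real AU data.
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))    # stand-in for extracted facial features
y = rng.integers(0, 6, size=600)  # stand-in AU intensity labels (levels 0-5)

# Project features onto the unit sphere: after L2 normalisation the linear
# kernel k(a, b) = a . b equals cos(a, b), so the classifier depends on
# orientation rather than magnitude, matching the abstract's motivation.
X_cos = normalize(X, norm="l2")

X_tr, X_te, y_tr, y_te = train_test_split(X_cos, y, random_state=0)

# Linear SVM on the cosine-mapped features, one class per intensity level.
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("intensity classification accuracy:", clf.score(X_te, y_te))
```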



Author information

Correspondence to Neeru Rathee.

About this article

Cite this article

Rathee, N., Ganotra, D. An efficient approach for facial action unit intensity detection using distance metric learning based on cosine similarity. SIViP 12, 1141–1148 (2018). https://doi.org/10.1007/s11760-018-1255-3
