Recognizing multiple emotion from ambiguous facial expressions on mobile platforms

Abstract

Extracting and understanding emotion is of great importance to the interaction between humans and machine communication systems. The most expressive way humans display emotion is through facial expressions. This paper proposes a multiple emotion recognition system that can recognize combinations of up to three different emotions using an active appearance model (AAM), the proposed classification standard, and a k-nearest neighbor (k-NN) classifier in mobile environments. The AAM captures expression variations, which the proposed classification standard quantifies as the user's expression changes in real time. The proposed k-NN classifier recognizes the basic emotions (normal, happy, sad, angry, surprise) as well as more ambiguous emotions formed by combining the basic emotions in real time, and each recognized emotion is further subdivided by its strength. Whereas most previous emotion recognition methods recognize only a single emotion at a time, this paper recognizes ambiguous emotions as combinations of the five basic emotions. For ease of understanding, the recognized result is presented in three ways on the mobile camera screen. In experiments, the system achieved an average recognition rate of 85 % and a 40 % performance gain for the optimized emotions. The implemented system also serves as an example of augmented reality, displaying a combination of live face video and a virtual animated avatar of the user.
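The paper itself includes no source code; the following is a minimal sketch of the k-NN combination step described in the abstract, assuming the AAM fitting stage has already produced a per-frame feature vector, that emotion strengths correspond to neighbor vote shares, and that at most three emotions are reported. All names and parameters here are hypothetical illustrations, not the authors' implementation.

```python
# Hedged sketch of k-NN multi-emotion classification over AAM features.
# Assumptions (not from the paper): Euclidean distance in AAM parameter
# space, vote share used as emotion "strength", top three emotions kept.
from collections import Counter

import numpy as np

EMOTIONS = ["normal", "happy", "sad", "angry", "surprise"]

def classify_emotions(feature, train_features, train_labels, k=7, max_emotions=3):
    """Return up to `max_emotions` (emotion, strength) pairs for one frame.

    feature        : 1-D AAM parameter vector for the current frame.
    train_features : 2-D array, one labeled AAM vector per row.
    train_labels   : basic-emotion label for each training row.
    """
    # Distance from the current frame to every labeled training sample.
    dists = np.linalg.norm(train_features - feature, axis=1)
    # Indices of the k nearest labeled samples.
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    # Vote share acts as the strength of each recognized emotion, so a
    # frame whose neighbors split 4/3 between "happy" and "surprise"
    # is reported as a combination of both, with strengths 4/7 and 3/7.
    return [(emotion, count / k) for emotion, count in votes.most_common(max_emotions)]
```

Reading off the top-k vote shares, rather than a single majority label, is what lets an ambiguous expression map to a weighted combination of basic emotions instead of being forced into one class.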

References

  • Abboud B, Davoine F, Dang M (2004) Facial expression recognition and synthesis based on an appearance model. Signal Process Image Commun 19(8):723–740

  • Black MJ, Yacoob Y (1997) Recognizing facial expressions in image sequences using local parameterized models of image motion. Int J Comput Vis 25(1):23–48

  • Chen C-W, Wang C-C (2008) 3D active appearance model for aligning faces in 2D images. IEEE/RSJ international conference on intelligent robots and systems, pp 22–26

  • Cheon Y, Kim D (2008) A natural facial expression recognition using differential-AAM and k-NNS. Pattern Recognit 42(7):1340–1350

  • Dailey MN, Cottrell GW, Padgett C, Adolphs R (2002) EMPATH: a neural network that categorizes facial expressions. J Cogn Neurosci 14:1158–1173

  • Edwards GJ, Taylor CJ, Cootes TF (1998) Interpreting face images using active appearance models. IEEE international conference on automatic face and gesture recognition

  • Ju MH, Kang H-B (2011) 3D face fitting method based on 2D active appearance models. IEEE international symposium on multimedia, pp 7–12

  • Jung SU, Kim DH (2006) New rectangle feature type selection for real-time facial expression recognition. J Control Automat Syst Eng 13(2):130–137

  • Lee Y-H, Han W, Kim Y, Kim B (2014) Facial feature extraction using an active appearance model on the iPhone. In: International conference on innovative mobile and internet services in ubiquitous computing, pp 196–200

  • Ioannou SV, Raouzaiou AT, Tzouvaras VA, Mailis TP, Karpouzis KC, Kollias SD (2005) Emotion recognition through facial expression analysis based on a neurofuzzy network. Neural Netw 18:423–435

  • Martins PAD (2008) Active appearance models for facial expression recognition and monocular head pose estimation. Master's thesis, Department of Electrical Engineering and Computer Science, University of Coimbra

  • Navarathna R, Sridharan S, Lucey S (2011) Fourier active appearance models. IEEE international conference on computer vision (ICCV), pp 1919–1926

  • Padgett C, Cottrell GW (1997) Representing face images for emotion classification. Proc Conf Adv Neural Inf Proc Syst 9:894–900

  • Penev PS, Atick JJ (1996) Local feature analysis: a general statistical theory for object representation. Netw Comput Neural Syst 7:477–500

  • Schlegel K, Grandjean D, Scherer KR (2014) Introducing the Geneva emotion recognition test: an example of Rasch-based test development. Psychol Assess 26(2):666–672. http://www.affective-sciences.org/GERT/

  • Teijeiro-Mosquera L, Alba-Castro JL (2011) Performance of active appearance model-based pose-robust face recognition. IET Comput Vis 5(6):348–357

  • Wu X, Kumar V, Quinlan JR, Ghosh J, Yang Q, Motoda H, McLachlan GJ, Ng A, Liu B, Yu PS, Zhou Z-H, Steinbach M, Hand DJ, Steinberg D (2008) Top 10 algorithms in data mining. Knowl Inf Syst 14(1):11–37

  • Xue G, Youwei Z (2006) Facial expression recognition based on the difference of statistical features. Int Conf Signal Process 3:16–20

  • Yoon H, Hahn H (2009) Real-time recognition system of facial expressions using principal component of Gabor-wavelet features. J Korean Inst Intell Syst 19(6):821–827

  • Zeng Z, Pantic M, Roisman GI, Huang TS (2009) A survey of affect recognition methods: audio, visual, and spontaneous expressions. IEEE Trans Pattern Anal Mach Intell 31(1):39–58

Acknowledgments

This work was supported by the ICT R&D program of MSIP/IITP [2014(I5501-14-1007), 3D Smart Media/Augmented Reality Technology, KCJR Cooperation International Standardization].

Author information

Corresponding author

Correspondence to Youngseop Kim.

Additional information

Communicated by A. Jara, M. R. Ogiela, I. You and F.-Y. Leu.

A shorter version of this paper was presented at the 2014 8th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (Lee et al. 2014).

About this article

Cite this article

Lee, YH., Han, W. & Kim, Y. Recognizing multiple emotion from ambiguous facial expressions on mobile platforms. Soft Comput 20, 1811–1819 (2016). https://doi.org/10.1007/s00500-015-1680-y
