Micro-Expression Recognition for Detecting Human Emotional Changes

  • Kazuhiko Sumi
  • Tomomi Ueda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9733)


We propose a method for estimating human emotional state during communication from four micro-expressions: mouth motion, head pose, gaze direction, and blinking interval. These micro-expressions were selected through a questionnaire survey of human observers watching video-recorded human conversations. We then implemented a recognition system for these micro-expressions: facial parts are detected with an RGB-Depth camera, the four expressions are measured, and a decision-tree-style classifier is applied to detect emotional states and state changes. In our experiment, we collected 30 videos of participants conversing with a friend, and trained and validated our algorithm with two-fold cross-validation. Comparing the classifier output with human examiners' observations confirmed a precision of over 70%.
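The decision-tree-style classification over the four micro-expression measurements can be sketched as follows. This is a minimal, hypothetical illustration: the thresholds, the normalization to [0, 1], and the emotional-state labels are placeholders for exposition, not values from the paper.

```python
# Hypothetical sketch of a decision-tree-style classifier over the four
# micro-expression features named in the abstract. All thresholds and
# labels below are illustrative assumptions, not the authors' values.

def classify_emotion(mouth_motion, head_pose_change, gaze_shift, blink_interval):
    """Return a coarse emotional-state label from four micro-expression
    measurements, each assumed to be normalized to the range [0, 1]."""
    if blink_interval < 0.3:              # short blink interval: high arousal
        if mouth_motion > 0.5:            # strong mouth motion while aroused
            return "excited"
        return "nervous"
    if gaze_shift > 0.6 or head_pose_change > 0.6:
        return "distracted"               # gaze or head wanders from partner
    return "calm"

print(classify_emotion(0.7, 0.2, 0.1, 0.2))  # -> excited
```

In practice such a tree would be learned from the 30 annotated videos (e.g. via two-fold cross-validation, as the abstract describes) rather than hand-tuned.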





Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Aoyama Gakuin University, Sagamihara, Japan
