Efficient Detection of Consecutive Facial Expression Apices Using Biologically Based Log-Normal Filters
The automatic extraction of the most relevant information from a video sequence of continuous affective states is an important challenge for efficient human-machine interaction systems. In this paper, a method is proposed to solve this problem in two steps: first, the automatic segmentation of consecutive emotional segments based on the responses of a set of Log-Normal filters; second, the automatic detection of the facial expression apices based on an estimate of the global face energy within each emotional segment, independently of the ongoing facial expression. The proposed method is fully automatic and does not depend on any reference image, such as a neutral face at the beginning of the sequence. It is the first contribution toward summarizing the most important affective information in a video sequence independently of the ongoing facial expressions. The robustness and efficiency of the proposed method under different acquisition conditions and facial differences have been evaluated on a large data set (157 video sequences) taken from two benchmark databases (the Hammal-Caplier and MMI databases) [1, 2], complemented by 20 recorded video sequences of multiple facial expressions (three to seven facial expressions per sequence) in order to include more challenging image data in which expressions are not neatly packaged as neutral-expression-neutral.
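The two steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the filter-bank parameters (`n_orients`, `n_freqs`, the centre frequencies, and the bandwidths `sigma_f`, `sigma_t`) are illustrative assumptions, and the apex is taken here simply as the frame of maximal global energy within a segment.

```python
import numpy as np

def log_normal_bank(shape, n_orients=4, n_freqs=3, sigma_f=0.55, sigma_t=0.4):
    """Bank of 2-D log-normal filters defined in the frequency domain.

    Each filter is the product of a radial term, Gaussian in log-frequency
    around a centre frequency f0, and an angular Gaussian around an
    orientation t0. All parameter values here are illustrative assumptions.
    """
    h, w = shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1e-9                      # avoid log(0) at the DC component
    theta = np.arctan2(fy, fx)
    filters = []
    for i in range(n_freqs):
        f0 = 0.1 * 2.0**i               # octave-spaced centre frequencies
        radial = np.exp(-(np.log(f / f0))**2 / (2 * sigma_f**2))
        for j in range(n_orients):
            t0 = j * np.pi / n_orients
            d = np.angle(np.exp(1j * (theta - t0)))  # wrapped angular distance
            angular = np.exp(-d**2 / (2 * sigma_t**2))
            filters.append(radial * angular)
    return filters

def frame_energy(frame, bank):
    """Global face energy: summed squared magnitude of all filter responses."""
    F = np.fft.fft2(frame.astype(float))
    return sum(np.sum(np.abs(np.fft.ifft2(F * g))**2) for g in bank)

def apex_index(frames):
    """Apex of one emotional segment: the frame of maximal global energy."""
    bank = log_normal_bank(frames[0].shape)
    energies = [frame_energy(f, bank) for f in frames]
    return int(np.argmax(energies))
```

In this sketch, a segment would first be delimited from the temporal evolution of the filter responses, and `apex_index` would then be applied to each segment separately, so no neutral reference frame is ever needed.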
Keywords: Facial expressions · Apices · Video affect summary · Log-Normal filters
- 2. Pantic, M., Valstar, M.F., Rademaker, R., Maat, L.: Web-based database for facial expression analysis. In: Proc. IEEE Int. Conf. on Multimedia and Expo (ICME 2005), Amsterdam, The Netherlands (July 2005)
- 6. Otsuka, T., Ohya, J.: Recognizing multiple persons' facial expressions using HMM based on automatic extraction of significant frames from image sequences. In: Proc. IEEE Int. Conf. on Image Processing, vol. 2, pp. 546–549 (1997)
- 7. Cohen, I., Cozman, F.G., Sebe, N., Cirelo, M.C., Huang, T.S.: Learning Bayesian network classifiers for facial expression recognition using both labeled and unlabeled data. In: Proc. IEEE CVPR (2003)
- 8. Hammal, Z., Massot, C.: Holistic and Feature-Based Information Towards Dynamic Multi-Expressions Recognition. In: Proc. Int. Conf. VISIGRAPP, Angers, France (May 17-21, 2010)
- 9. Massot, C., Herault, J.: Model of Frequency Analysis in the Visual Cortex and the Shape from Texture Problem. International Journal of Computer Vision 76(2) (2008)
- 11. Ekman, P., Friesen, W.V.: The Facial Action Coding System (FACS): A technique for the measurement of facial action. Consulting Psychologists Press, Palo Alto (1978)
- 12. Valstar, M.F., Pantic, M.: Induced Disgust, Happiness and Surprise: an Addition to the MMI Facial Expression Database. In: Proc. Int. Conf. LREC, Malta (May 2010)
- 14. Beaudot, W.: Le traitement neuronal de l'information dans la rétine des vertébrés: Un creuset d'idées pour la vision artificielle [Neural processing of information in the vertebrate retina: a melting pot of ideas for artificial vision]. PhD thesis, INPG, TIRF, Grenoble, France (1994)