Efficient Detection of Consecutive Facial Expression Apices Using Biologically Based Log-Normal Filters
The automatic extraction of the most relevant information from a video sequence of continuous affective states is an important challenge for efficient human-machine interaction systems. In this paper, a method is proposed to solve this problem in two steps: first, consecutive emotional segments are automatically delimited based on the response of a set of Log-Normal filters; second, the facial expression apex within each emotional segment is automatically detected based on an estimate of the global face energy, independently of the ongoing facial expression. The method is fully automatic and requires no reference image, such as a neutral face at the beginning of the sequence. It is the first contribution toward summarizing the most important affective information present in a video sequence independently of the ongoing facial expressions. The robustness and efficiency of the method under different acquisition conditions and facial differences have been evaluated on a large data set (157 video sequences) taken from two benchmark databases (the Hammal-Caplier and MMI databases) [1, 2] and on 20 recorded video sequences of multiple facial expressions (three to seven per sequence), included to provide more challenging image data in which expressions are not neatly packaged as neutral-expression-neutral.
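The two quantities the abstract relies on, a bank of Log-Normal filters and a global face energy computed from their responses, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the filter definition (a radial log-normal profile times an angular Gaussian in the frequency domain) is the standard form of such filters, while the specific center frequencies, bandwidths, orientations, and the sum-of-squared-responses energy are assumptions made here for illustration.

```python
import numpy as np

def log_normal_filter(shape, f0, theta0, sigma_f=0.55, sigma_theta=np.pi / 8):
    """Frequency-domain transfer function: radial log-normal profile
    centered at f0 times an angular Gaussian centered at theta0.
    (Parameter values are illustrative assumptions, not the paper's.)"""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1e-9  # avoid log(0) at the DC component
    radial = np.exp(-(np.log(f / f0)) ** 2 / (2 * np.log(sigma_f) ** 2))
    theta = np.arctan2(fy, fx)
    d = np.angle(np.exp(1j * (theta - theta0)))  # wrapped angular distance
    g = radial * np.exp(-d**2 / (2 * sigma_theta**2))
    g[0, 0] = 0.0  # suppress the DC component
    return g

def face_energy(frame, f0s=(0.08, 0.16),
                thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Global face energy of one frame: sum of squared magnitudes of the
    responses over a small Log-Normal filter bank (bank size and
    orientations are assumptions)."""
    F = np.fft.fft2(frame.astype(float))
    energy = 0.0
    for f0 in f0s:
        for th in thetas:
            resp = np.fft.ifft2(F * log_normal_filter(frame.shape, f0, th))
            energy += np.sum(np.abs(resp) ** 2)
    return energy
```

In a pipeline of this kind, `face_energy` would be evaluated per frame within each emotional segment, and the frame maximizing it taken as the apex candidate; since the filtering is linear, the energy grows with facial-deformation contrast rather than depending on which expression is displayed.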
Keywords: Facial expressions · Apices · Video affect summary · Log-Normal filters