Audio-Based Emotion Recognition from Natural Conversations Based on Co-Occurrence Matrix and Frequency Domain Energy Distribution Features
Cite this paper as:
Sayedelahl A., Fewzee P., Kamel M.S., Karray F. (2011) Audio-Based Emotion Recognition from Natural Conversations Based on Co-Occurrence Matrix and Frequency Domain Energy Distribution Features. In: D'Mello S., Graesser A., Schuller B., Martin J.-C. (eds) Affective Computing and Intelligent Interaction. Lecture Notes in Computer Science, vol 6975. Springer, Berlin, Heidelberg.
Emotion recognition from natural speech is a very challenging problem. The audio sub-challenge represents an initial step towards building an efficient audio-visual emotion recognition system that can detect emotions in real-life applications (e.g. human-machine interaction and communication). The SEMAINE database, which consists of emotionally colored conversations, is used as the benchmark database. This paper presents our system for recognizing emotion from speech in terms of positive/negative valence and high/low arousal, expectancy, and power. We introduce a new set of features, including co-occurrence-matrix-based features and frequency-domain energy-distribution features, and compare them with well-known prosodic and spectral features. Classification using the proposed features shows promising results relative to the classical features on both the development and test sets.
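The two feature families named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the quantization level count, co-occurrence offset, and number of frequency bands are all assumptions, and the co-occurrence statistics shown (contrast, energy, homogeneity) are standard Haralick-style descriptors applied here to a magnitude spectrogram:

```python
# Hypothetical sketch of spectrogram-based co-occurrence and band-energy
# features; parameter choices are illustrative, not the paper's settings.
import numpy as np

def cooccurrence_features(spec, levels=8, offset=(0, 1)):
    """Quantize a magnitude spectrogram into `levels` bins, count pairs of
    levels co-occurring at the given (freq, time) offset, and return
    contrast, energy, and homogeneity of the normalized co-occurrence matrix."""
    s = spec - spec.min()
    s = np.floor(s / (s.max() + 1e-12) * levels).clip(0, levels - 1).astype(int)
    df, dt = offset
    a = s[:s.shape[0] - df, :s.shape[1] - dt]   # reference cells
    b = s[df:, dt:]                             # neighbor cells at the offset
    C = np.zeros((levels, levels))
    np.add.at(C, (a.ravel(), b.ravel()), 1)     # accumulate pair counts
    C /= C.sum()                                # joint probability matrix
    i, j = np.indices(C.shape)
    contrast = ((i - j) ** 2 * C).sum()
    energy = (C ** 2).sum()
    homogeneity = (C / (1 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

def band_energy_distribution(spec, n_bands=4):
    """Fraction of total spectral energy falling in each of n_bands
    equal-width frequency bands (a simple energy-distribution descriptor)."""
    bands = np.array_split(spec ** 2, n_bands, axis=0)
    e = np.array([b.sum() for b in bands])
    return e / (e.sum() + 1e-12)
```

In use, each utterance's spectrogram would be reduced to a small fixed-length vector by concatenating such statistics before classification.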
Keywords: Speech Emotion Recognition · Co-Occurrence Matrix · Frequency Domain Energy Distribution