Audio-Based Emotion Recognition from Natural Conversations Based on Co-Occurrence Matrix and Frequency Domain Energy Distribution Features

Emotion recognition from natural speech is a very challenging problem. The audio sub-challenge represents an initial step towards building an efficient audio-visual emotion recognition system that can detect emotions in real-life applications (e.g., human-machine interaction and communication). The SEMAINE database, which consists of emotionally colored conversations, is used as the benchmark. This paper presents our speech-based emotion recognition system, which classifies valence as positive/negative and arousal, expectancy, and power as high/low. We introduce a new set of features, including co-occurrence-matrix-based features as well as frequency-domain energy distribution features, and compare them with well-known prosodic and spectral features. Classification using the proposed features shows promising results relative to the classical features on both the development and test sets.
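The abstract does not spell out how the two feature families are computed. As a rough, hypothetical illustration only (the function names, quantization scheme, offset choice, and summary statistics below are our assumptions, not the paper's method), one common way to derive co-occurrence features from audio is to quantize a time-frequency representation and build a gray-level co-occurrence matrix over it, while frequency-domain energy distribution features can be sketched as per-band energy fractions:

```python
import numpy as np

def cooccurrence_features(spec, levels=8, offset=(0, 1)):
    """Illustrative GLCM-style features over a quantized spectrogram.

    spec   : 2-D array (frequency x time) of magnitudes
    levels : number of quantization levels
    offset : (row, col) displacement between co-occurring cells
    """
    # Quantize magnitudes into `levels` discrete bins (indices 0..levels-1)
    edges = np.linspace(spec.min(), spec.max(), levels + 1)[1:-1]
    q = np.digitize(spec, edges)

    # Count co-occurrences of level pairs at the given offset
    C = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = q.shape
    for r in range(rows - abs(dr)):
        for c in range(cols - abs(dc)):
            C[q[r, c], q[r + dr, c + dc]] += 1
    C /= C.sum() + 1e-12  # normalize to joint probabilities

    # A few standard texture statistics (contrast, energy, homogeneity)
    i, j = np.indices(C.shape)
    contrast = np.sum(C * (i - j) ** 2)
    energy = np.sum(C ** 2)
    homogeneity = np.sum(C / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

def band_energy_distribution(spec, n_bands=4):
    """Fraction of total spectral energy falling in each of
    `n_bands` equal-width frequency bands (an assumed variant of
    frequency-domain energy distribution features)."""
    power = spec ** 2
    bands = np.array_split(power, n_bands, axis=0)  # split along frequency axis
    e = np.array([b.sum() for b in bands])
    return e / (e.sum() + 1e-12)
```

In a pipeline like the one the abstract describes, such per-utterance feature vectors would then feed a classifier for each binary dimension (valence, arousal, expectancy, power); the exact features and classifier used in the paper may differ from this sketch.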