Mixed Feelings About Using Phoneme-Level Models in Emotion Recognition
This study investigates the application of MFCC-based models to two related tasks: the recognition of emotional speech and the recognition of emotions in speech. More specifically, it examines the performance of phone-level models. First, we present results from forced alignment for phonetic segmentation on GEMEP, a novel multimodal corpus of acted emotional utterances; the resulting segmentations are then used in emotion recognition experiments.
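As background for the features discussed above, the sketch below shows a minimal MFCC extraction pipeline (pre-emphasis, framing, mel filterbank, log, DCT). It is an illustrative reimplementation with NumPy/SciPy, not the toolchain used in the study (which relies on HTK); all function names and parameter defaults here are our own assumptions.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=0.025, frame_step=0.010,
         n_filters=26, n_fft=512, n_ceps=13):
    """Illustrative MFCC computation; parameter defaults are typical, not from the paper."""
    # Pre-emphasis: boost high frequencies
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Frame the signal into overlapping windows and apply a Hamming window
    flen = int(round(frame_len * sr))
    fstep = int(round(frame_step * sr))
    n_frames = 1 + max(0, (len(emphasized) - flen) // fstep)
    idx = np.arange(flen)[None, :] + fstep * np.arange(n_frames)[:, None]
    frames = emphasized[idx] * np.hamming(flen)
    # Power spectrum per frame
    pow_spec = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft
    # Triangular mel-spaced filterbank
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # Log filterbank energies, then DCT-II to decorrelate; keep first n_ceps coefficients
    feat = np.log(np.maximum(pow_spec @ fbank.T, np.finfo(float).eps))
    return dct(feat, type=2, axis=1, norm='ortho')[:, :n_ceps]

# Example: one second of a synthetic 440 Hz tone at 16 kHz
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (frames, cepstral coefficients)
```

In an HTK-based setup such as the one described here, the equivalent step is performed by `HCopy` with a `TARGETKIND = MFCC` configuration; the phone-level HMMs are then trained on these frame-level feature vectors.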
- 1. Bänziger, T., Pirker, H., Scherer, K.: GEMEP - GEneva Multimodal Emotion Portrayals: A corpus for the study of multimodal emotional expressions. In: LREC 2006 Workshop on Corpora for Research on Emotion and Affect, Genoa, Italy, pp. 15–19 (2006)
- 2. Young, S., Evermann, G., Kershaw, D., Moore, G., Odell, J., Ollason, D., Povey, D., Valtchev, V., Woodland, P.: The HTK Book (version 3.4). Cambridge University Engineering Department, Cambridge, UK (2006)