EmoChildRu: Emotional Child Russian Speech Corpus
We present “EmoChildRu”, the first corpus of emotional child speech in Russian, containing audio material from children aged 3–7 years. The database comprises over 20,000 recordings (approx. 30 h) collected from 100 children. Recordings were made in three controlled settings designed to elicit different emotional states in the children: playing with a standard set of toys; repeating words after a toy parrot in a game store setting; and watching a cartoon and then retelling the story. The corpus is designed for studying how emotional state is reflected in the characteristics of voice and speech, and for studying the formation of emotional states in ontogenesis. A portion of the corpus is annotated for three emotional states (discomfort, neutral, comfort). Additional data include brain activity measurements (original EEG and evoked-potential records), the results of adult listeners' analysis of child speech, questionnaires, and descriptions of the dialogues. The paper reports two child emotional speech analysis experiments on the corpus: perception by adult listeners (humans) and classification by an automatic classifier (machine). The automatic classification results closely match human perception, although accuracy is below 55 % for both, illustrating the difficulty of recognizing child emotion from speech under naturalistic conditions.
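The machine experiment described above maps per-utterance acoustic feature vectors to one of the three annotated states (discomfort, neutral, comfort). As an illustration only: the paper's actual system uses an SVM over large openSMILE-style functional feature sets, but the 3-class setup can be sketched with a toy nearest-centroid classifier over made-up two-dimensional "features" (the feature values and dimensionality here are hypothetical placeholders, not from the corpus):

```python
# Toy sketch of 3-class emotional-state classification from acoustic
# features. Real systems extract hundreds of functionals (e.g. with
# openSMILE) per utterance and train an SVM; here each utterance is a
# hypothetical (mean F0 in Hz, energy) pair and the classifier assigns
# the label of the nearest class centroid.
from statistics import mean

# Hypothetical training data: feature vectors per annotated state.
train = {
    "discomfort": [(310.0, 0.90), (295.0, 0.80)],
    "neutral":    [(250.0, 0.50), (240.0, 0.45)],
    "comfort":    [(270.0, 0.70), (280.0, 0.75)],
}

def centroid(vecs):
    """Component-wise mean of a list of equal-length vectors."""
    return tuple(mean(v[i] for v in vecs) for i in range(len(vecs[0])))

centroids = {label: centroid(vs) for label, vs in train.items()}

def classify(x):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

print(classify((300.0, 0.85)))  # -> discomfort (closest centroid)
```

A nearest-centroid rule is far weaker than the SVM used in the paper; it serves only to make the input/output shape of the classification task concrete.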
Keywords: Emotional child speech · Perceptual analysis · Spectrographic analysis · Emotional states · Computational paralinguistics
This study is financially supported by the Russian Foundation for Humanities (project # 13-06-00041a), the Russian Foundation for Basic Research (projects # 13-06-00281a, 15-06-07852a, and 15-07-04415a), the Council for grants of the President of Russia (project # MD-3035.2015.8) and by the Government of Russia (grant No. 074-U01).