International Conference on Speech and Computer

SPECOM 2015: Speech and Computer, pp. 144–152

EmoChildRu: Emotional Child Russian Speech Corpus

  • Elena Lyakso
  • Olga Frolova
  • Evgeniya Dmitrieva
  • Aleksey Grigorev
  • Heysem Kaya
  • Albert Ali Salah
  • Alexey Karpov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9319)

Abstract

We present “EmoChildRu”, the first emotional child speech corpus in Russian, containing audio materials of 3- to 7-year-old children. The database comprises over 20,000 recordings (approx. 30 h) collected from 100 children. Recordings were carried out in three controlled settings that induce different emotional states in the children: playing with a standard set of toys; repetition of words after a toy parrot in a game store setting; and watching a cartoon and retelling its story. The corpus is designed for studying how emotional state is reflected in the characteristics of voice and speech, and for studies of the formation of emotional states in ontogenesis. A portion of the corpus is annotated for three emotional states (discomfort, neutral, comfort). Additional data include brain activity measurements (original EEG and evoked-potential records), the results of adult listeners' analysis of child speech, questionnaires, and descriptions of dialogues. The paper reports two child emotional speech analysis experiments on the corpus: one by adult listeners (humans) and one by an automatic classifier (machine). Automatic classification results are very similar to human perception, although accuracy is below 55% for both, showing the difficulty of child emotion recognition from speech under naturalistic conditions.
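The machine-side experiment described above can be sketched as a standard three-class classification pipeline over per-utterance acoustic feature vectors. The sketch below is illustrative only: it uses synthetic feature vectors as stand-ins for real acoustic functionals, and a linear SVM as one plausible classifier choice; the paper's actual feature set, classifier, and evaluation protocol may differ.

```python
# Hedged sketch: 3-class child emotion classification (discomfort /
# neutral / comfort) from per-utterance acoustic feature vectors.
# Features here are SYNTHETIC placeholders, not real corpus data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

LABELS = ["discomfort", "neutral", "comfort"]
rng = np.random.default_rng(0)

# Synthetic data: 300 "utterances" x 64 features, with a small
# class-dependent mean shift so the classes are partly separable.
X = rng.normal(size=(300, 64))
y = rng.integers(0, 3, size=300)
X += y[:, None] * 0.5

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Standardize features, then fit a linear SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
clf.fit(X_tr, y_tr)

acc = clf.score(X_te, y_te)
print(f"3-class accuracy: {acc:.2f} (chance level = 0.33)")
```

On real naturalistic child speech the abstract reports accuracy below 55% for three classes, i.e. well above the 33% chance level but far from solved; the synthetic data above is only meant to show the pipeline shape, not to reproduce that number.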

Keywords

Emotional child speech · Perceptual analysis · Spectrographic analysis · Emotional states · Computational paralinguistics

Acknowledgments

This study is financially supported by the Russian Foundation for Humanities (project # 13-06-00041a), the Russian Foundation for Basic Research (projects # 13-06-00281a, 15-06-07852a, and 15-07-04415a), the Council for grants of the President of Russia (project # MD-3035.2015.8) and by the Government of Russia (grant No. 074-U01).


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Elena Lyakso (1)
  • Olga Frolova (1)
  • Evgeniya Dmitrieva (1)
  • Aleksey Grigorev (1)
  • Heysem Kaya (2)
  • Albert Ali Salah (2)
  • Alexey Karpov (3, 4)

  1. The Child Speech Research Group, St. Petersburg State University, St. Petersburg, Russia
  2. Department of Computer Engineering, Bogazici University, Istanbul, Turkey
  3. St. Petersburg Institute for Informatics and Automation of RAS, St. Petersburg, Russia
  4. ITMO University, St. Petersburg, Russia
