Enhancing Speech-Based Depression Detection Through Gender Dependent Vowel-Level Formant Features

  • Nicholas Cummins (corresponding author)
  • Bogdan Vlasenko
  • Hesam Sagha
  • Björn Schuller
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10259)

Abstract

Depression has been consistently linked with alterations in speech motor control, characterised by changes in formant dynamics. However, potential differences in how depression manifests in male and female speech have not been fully explored. This paper considers speech-based depression classification using gender-dependent features and classifiers. Key observations presented herein reveal gender differences in the effect of depression on vowel-level formant features. Building on this observation, we also show that a small set of hand-crafted, gender-dependent formant features can outperform two state-of-the-art acoustic-only feature sets in two-class (depressed versus non-depressed) classification.
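To make the pipeline concrete, the following is a minimal sketch of the kind of system the abstract describes: per-vowel formant statistics feeding gender-dependent two-class classifiers. It is illustrative only; the vowel annotations, the chosen feature statistics, and the classifier settings are assumptions rather than the paper's exact configuration. Praat-style formant tracking is accessed through the parselmouth wrapper, and scikit-learn provides the linear classifiers.

```python
# Minimal sketch: vowel-level formant features + gender-dependent
# linear classifiers. All names and settings below (vowel_spans,
# the pooled statistics, C=0.1) are illustrative assumptions and
# do not reproduce the paper's exact feature set or toolchain.
import numpy as np
import parselmouth                       # Python wrapper around Praat
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def vowel_formant_features(wav_path, vowel_spans, max_formant_hz=5500.0):
    """Mean/std of F1 and F2 (Hz) pooled over annotated vowel segments."""
    snd = parselmouth.Sound(wav_path)
    formants = snd.to_formant_burg(time_step=0.01,
                                   maximum_formant=max_formant_hz)
    f1, f2 = [], []
    for start, end in vowel_spans:       # (start_s, end_s) vowel annotations
        for t in np.arange(start, end, 0.01):
            v1 = formants.get_value_at_time(1, t)    # F1 at time t
            v2 = formants.get_value_at_time(2, t)    # F2 at time t
            if not (np.isnan(v1) or np.isnan(v2)):   # skip undefined frames
                f1.append(v1)
                f2.append(v2)
    return np.array([np.mean(f1), np.std(f1), np.mean(f2), np.std(f2)])

def train_gender_dependent(X, y, gender):
    """Fit one linear SVM per gender (gender-dependent modelling)."""
    models = {}
    for g in ("female", "male"):
        idx = gender == g                # boolean mask for this gender
        models[g] = make_pipeline(StandardScaler(),
                                  LinearSVC(C=0.1)).fit(X[idx], y[idx])
    return models
```

Gender dependence can also enter at the feature-extraction stage: Praat's documentation recommends a maximum-formant ceiling of roughly 5000 Hz for adult male and 5500 Hz for adult female speakers, which is why max_formant_hz is exposed as a parameter in the sketch above.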

Keywords

Depression · Gender · Vowel-level formants · Speech motor control · Classification

Acknowledgements

The research leading to these results has received funding from the European Community’s Seventh Framework Programme through the ERC Starting Grant No. 338164 (iHEARu), and IMI RADAR-CNS under grant agreement No. 115902.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Nicholas Cummins (1) (corresponding author)
  • Bogdan Vlasenko (1)
  • Hesam Sagha (1)
  • Björn Schuller (1, 2)
  1. Chair of Complex and Intelligent Systems, University of Passau, Passau, Germany
  2. Department of Computing, Imperial College London, London, UK
