Detecting Depression in Dyadic Conversations with Multimodal Narratives and Visualizations

  • Joshua Y. Kim
  • Greyson Y. Kim
  • Kalina Yacef
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11919)


Conversations contain a wide spectrum of multimodal information that gives hints about the emotions and moods of the speaker. In this paper, we developed a system that supports humans in analyzing conversations. Our main contribution is the identification of appropriate multimodal features and the integration of such features into verbatim conversation transcripts. We demonstrate the ability of our system to take in a wide range of multimodal information and automatically generate a prediction score for the depression state of the individual. Our experiments showed that this approach yielded better performance than the baseline model. Furthermore, the multimodal narrative approach makes it easy to integrate insights from other disciplines, such as conversational analysis and psychology. Lastly, this interdisciplinary and automated approach is a step towards emulating how practitioners record the course of treatment, as well as how conversational analysts analyze conversations by hand.
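To illustrate the core idea of integrating multimodal features into a verbatim transcript, the sketch below appends bracketed behavioral cues to an utterance. The feature names, thresholds, and annotation format are purely illustrative assumptions, not the paper's actual pipeline:

```python
# Hypothetical sketch: weaving multimodal cues into a verbatim transcript line.
# Feature names (smile_intensity, pause_before_s, pitch_variation) and the
# thresholds below are illustrative, not taken from the paper.

def annotate_utterance(text, features):
    """Append bracketed multimodal annotations to a transcript line."""
    cues = []
    if features.get("smile_intensity", 0.0) > 0.5:
        cues.append("smiling")
    if features.get("pause_before_s", 0.0) > 2.0:
        cues.append(f"{features['pause_before_s']:.1f}s pause before speaking")
    if features.get("pitch_variation", 1.0) < 0.3:
        cues.append("flat intonation")
    suffix = f" [{'; '.join(cues)}]" if cues else ""
    return text + suffix

line = annotate_utterance(
    "I have been feeling tired lately.",
    {"pause_before_s": 3.2, "pitch_variation": 0.2},
)
print(line)  # I have been feeling tired lately. [3.2s pause before speaking; flat intonation]
```

The resulting annotated lines form a single text stream, so a downstream text classifier can consume verbal and nonverbal evidence together.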


Multi-disciplinary AI · Conversational analysis · Visualization · Multimodal data



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Sydney, Darlington, Australia
  2. Success Beyond Pain, Success, Australia
