
Classification of Formal and Informal Dialogues Based on Emotion Recognition Features

  • György Kovács
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11107)

Abstract

Social context is an important part of human communication, and is therefore also important for improved human-computer interaction. One aspect of social context is the level of formality. Here, motivated by the differences observed between the emotional annotation of formal and informal dialogues in the HuComTech corpus, we introduce a content-free classification scheme based on feature sets designed for emotion recognition. With this method we attain an error rate of \(8.8\%\) in the classification of formal and informal dialogues, which corresponds to a relative error rate reduction of more than \(40\%\) compared to earlier results. By combining the proposed method with earlier models, we further reduce the error rate to below \(7\%\).
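For concreteness, the quoted figures also pin down the baseline they are measured against. With the usual definition of relative error rate reduction,

\[
\mathrm{RER} = \frac{e_{\mathrm{old}} - e_{\mathrm{new}}}{e_{\mathrm{old}}},
\]

an error rate of \(e_{\mathrm{new}} = 8.8\%\) together with \(\mathrm{RER} > 40\%\) implies a baseline error rate of \(e_{\mathrm{old}} > 8.8\% / 0.6 \approx 14.7\%\).

The overall setup can be pictured with the following minimal sketch; it is an illustration under stated assumptions, not the paper's implementation. It assumes one fixed-length emotion-recognition feature vector per dialogue (the 384-dimensional size of a typical emotion challenge feature set is used purely as a placeholder) and uses scikit-learn's SVM as a stand-in for the SVM/MultiBoost tooling named in the keywords. The random matrix below merely stands in for real per-dialogue features.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_dialogues, n_features = 200, 384               # placeholder dimensions
X = rng.normal(size=(n_dialogues, n_features))   # stand-in feature vectors
y = rng.integers(0, 2, size=n_dialogues)         # 0 = formal, 1 = informal

# RBF-kernel SVM on standardized features; error rate = 1 - accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated error rate: {1.0 - scores.mean():.3f}")

On real features rather than random noise, the printed error rate is the quantity the abstract reports; combining such a classifier's outputs with those of earlier models is what yields the final sub-\(7\%\) figure.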

Keywords

HuComTech · SVM · MultiBoost · Affective computing

Acknowledgements

The research reported in this paper was conducted with the support of the Hungarian Scientific Research Fund (OTKA), grant #K116938.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. MTA Research Institute for Linguistics, Budapest, Hungary
  2. MTA-SZTE Research Group on Artificial Intelligence, Szeged, Hungary
