Analysis of Temporal Features for Interaction Quality Estimation

  • Stefan Ultes
  • Alexander Schmitt
  • Wolfgang Minker
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 427)


Many different approaches for estimating the Interaction Quality (IQ) of Spoken Dialogue Systems have been investigated. While dialogues clearly have a sequential nature, statistical classification approaches designed for sequential problems do not seem to outperform static approaches on automatic IQ estimation, i.e., approaches regarding each turn as independent of the surrounding dialogue. Hence, we analyse this effect by investigating the subset of temporal features used as input for statistical classification of IQ. We extend the set of temporal features to cover both the system view and the user view. We determine the contribution of each feature sub-group, showing that temporal features contribute most to classification performance. Furthermore, for the feature sub-group modelling temporal effects with a window, we modify the window size, increasing the overall performance significantly by +15.69% and achieving an Unweighted Average Recall of 0.562.


Keywords: Spoken dialogue system evaluation · User satisfaction · Support vector machine classification
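The window-based temporal features and the Unweighted Average Recall (UAR) metric mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the chapter's actual feature set: the turn fields, feature names, and default window size are assumptions chosen for clarity.

```python
from dataclasses import dataclass

# Hypothetical per-turn record; the field names are illustrative,
# not the schema of the annotated corpus used in the chapter.
@dataclass
class Turn:
    asr_rejected: bool   # system view: ASR rejected the user utterance
    barge_in: bool       # user view: the user barged in on the prompt

def window_features(turns, i, n=3):
    """Event counts within the last n turns up to and including turn i,
    i.e., temporal features computed over a sliding window."""
    window = turns[max(0, i - n + 1): i + 1]
    return {
        "#rejections_win": sum(t.asr_rejected for t in window),
        "#barge_ins_win": sum(t.barge_in for t in window),
    }

def unweighted_average_recall(y_true, y_pred):
    """UAR: the mean of per-class recalls, so rare IQ labels
    weigh as much as frequent ones."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

A static classifier would see only the current turn's parameters; the windowed counts above are one simple way to expose recent dialogue history to it without a fully sequential model.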



Copyright information

© Springer Science+Business Media Singapore 2017

Authors and Affiliations

  • Stefan Ultes (1)
  • Alexander Schmitt (2)
  • Wolfgang Minker (2)
  1. Engineering Department, University of Cambridge, Cambridge, UK
  2. Institute of Communications Engineering, Ulm University, Ulm, Germany
