Estimation of User’s Willingness to Talk About the Topic: Analysis of Interviews Between Humans

  • Yuya Chiba
  • Akinori Ito
Chapter
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 427)

Abstract

This research estimated the user's willingness to talk about the topic provided by a dialog system. Dialog management based on the user's willingness is expected to improve the satisfaction the user gains from the dialog with the system. We collected interview dialogs between humans to analyze features for the estimation, and a statistical test revealed significant differences in the statistics of \(F_0\) and power of the speech, as well as in the degree of facial movement. We then conducted discrimination experiments using multi-modal features with an SVM and obtained the best result with audio-visual information: a discrimination ratio of 80.4% under the leave-one-out condition and 77.1% under the subject-open condition.
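The experimental setup described above — fusing audio statistics (\(F_0\), power) with a visual feature (facial movement) and classifying willingness with an SVM under leave-one-out cross-validation — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature dimensions, data, and labels are synthetic placeholders, and only the overall pipeline (feature concatenation, SVM, leave-one-out evaluation) follows the abstract.

```python
# Illustrative sketch (not the authors' implementation): binary
# discrimination of high vs. low willingness to talk from multi-modal
# features with an SVM, evaluated with leave-one-out cross-validation.
# All feature values and labels below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n = 48  # number of dialog segments (placeholder)

# Hypothetical audio features: statistics of F0 and power per segment
# (e.g. mean, std, range of each), and a visual feature vector
# summarizing the degree of facial movement.
audio = rng.normal(size=(n, 6))
visual = rng.normal(size=(n, 2))
X = np.hstack([audio, visual])      # audio-visual fusion by concatenation
y = rng.integers(0, 2, size=n)      # 1 = willing to talk, 0 = not

# Standardize features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Leave-one-out: each segment is held out once as the test sample.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.3f}")
```

With real features, the "subject-open" condition in the abstract would instead hold out all segments of one speaker at a time (e.g. scikit-learn's `LeaveOneGroupOut` with speaker IDs as groups), which prevents the classifier from exploiting speaker identity.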

Keywords

User’s willingness to talk · Spoken dialog system · Multi-modal information


Copyright information

© Springer Science+Business Media Singapore 2017

Authors and Affiliations

  1. Graduate School of Engineering, Tohoku University, Sendai, Japan
