When to Elicit Feedback in Dialogue: Towards a Model Based on the Information Needs of Speakers

  • Hendrik Buschmeier
  • Stefan Kopp
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8637)

Abstract

Communicative feedback in dialogue is an important mechanism that helps interlocutors coordinate their interaction. Listeners pro-actively provide feedback when they think that it is important for the speaker to know their mental state, and speakers pro-actively seek listener feedback when they need information on whether a listener perceived, understood or accepted their message. This paper presents first steps towards a model for enabling attentive speaker agents to determine when to elicit feedback based on continuous assessment of their information needs about a user’s listening state.
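The core idea — continuously assessing the speaker's information needs about the listener's state and eliciting feedback when those needs are high — can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: it assumes a discrete belief distribution over listener states and uses Shannon entropy as a simple proxy for "information need", with an arbitrary elicitation threshold.

```python
import math

# Hypothetical listener states; the actual model may use a richer state space.
LISTENER_STATES = ["perceived", "understood", "accepted", "lost"]

def entropy(belief):
    """Shannon entropy (in bits) of a discrete belief distribution."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def should_elicit_feedback(belief, threshold=1.5):
    """Elicit feedback when the speaker's uncertainty about the
    listener's state (entropy of the belief) exceeds a threshold."""
    return entropy(belief) >= threshold

# Confident belief: the listener almost certainly understood.
confident = {"perceived": 0.05, "understood": 0.90,
             "accepted": 0.04, "lost": 0.01}
# Uncertain belief: the speaker has little evidence either way.
uncertain = {s: 0.25 for s in LISTENER_STATES}

print(should_elicit_feedback(confident))  # low entropy -> no elicitation
print(should_elicit_feedback(uncertain))  # maximal entropy -> elicit
```

In a running agent, the belief would be updated incrementally from observed user behaviour (gaze, backchannels, facial expression) rather than fixed, so the entropy — and with it the decision to elicit feedback — varies over the course of the speaker's utterance.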

Keywords

Communicative feedback · feedback elicitation · dialogue

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Hendrik Buschmeier¹
  • Stefan Kopp¹
  1. Sociable Agents Group – CITEC and Faculty of Technology, Bielefeld University, Bielefeld, Germany
