Attention Based Dialogue Context Selection Model

  • Weidi Xu
  • Yong Ren
  • Ying Tan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11302)


The phenomena of Information Overload and Conversational Dependency in multi-turn dialogues introduce substantial noise into feature learning in existing deep learning models. To address this problem, this paper proposes the Attention Based Dialogue Context Selection Model (ABDCS). The model uses an attention mechanism to extract the relationship between the current response utterance and the previous utterances. Qualitative and quantitative analyses show that ABDCS selects the semantically related utterances from the dialogue history as context and is robust to noise.
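The context-selection idea in the abstract can be illustrated with a minimal sketch: score each utterance in the dialogue history against the current response with an attention mechanism, then keep only the highest-weighted utterances as context. This is not the authors' implementation; the function name, the dot-product scoring, and the fixed top-k selection are assumptions for illustration only.

```python
import numpy as np

def attention_select(history, response, top_k=2):
    """Score each utterance embedding in `history` against the `response`
    embedding with dot-product attention, and return the indices of the
    top-k most related utterances (illustrative sketch, not ABDCS itself)."""
    history = np.asarray(history, dtype=float)    # shape: (n_utterances, dim)
    response = np.asarray(response, dtype=float)  # shape: (dim,)
    scores = history @ response                   # dot-product relevance scores
    # Softmax turns the scores into attention weights over the history.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Keep the k utterances with the highest attention weight as context.
    top = np.argsort(weights)[::-1][:top_k]
    return sorted(top.tolist()), weights

# Toy example: the first and third "utterances" align with the response.
indices, weights = attention_select([[1, 0], [0, 1], [1, 1]], [1, 0])
```

In the toy example the selected indices are `[0, 2]`, the two utterances whose embeddings have the largest dot product with the response; a real model would learn the embeddings and the scoring function rather than use raw dot products.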



This work was supported by the Natural Science Foundation of China (NSFC) under grant nos. 61673025 and 61375119, by the Beijing Natural Science Foundation (4162029), and partially by the National Key Basic Research Development Plan (973 Plan) Project of China under grant no. 2015CB352302.



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Key Laboratory of Machine Perception (Ministry of Education) and Department of Machine Intelligence, School of Electronics Engineering and Computer Science, Peking University, Beijing, People's Republic of China
  2. Complex Engineered System Lab (CESL), Department of Electronic Engineering, Tsinghua University, Beijing, People's Republic of China
