
Training Two-Stage Knowledge-Grounded Dialogues with Attention Feedback

  • Conference paper
  • First Online:
Natural Language Processing and Chinese Computing (NLPCC 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13551)

Abstract

Knowledge-grounded retrieval-based dialogue systems have attracted increasing attention. Among them, two-stage dialogue models, which separate training into knowledge retrieval (via a retriever) and response ranking (via a ranker), have proved powerful. However, these approaches require knowledge-grounded dialogues with corresponding hand-annotated knowledge labels. In this paper, we therefore propose training two-stage knowledge-grounded dialogue models with knowledge attention feedback from the ranker to the retriever. In each training iteration, the ranker provides knowledge attention scores as pseudo-supervised feedback for optimizing the retriever. We conduct experiments on two public datasets, and the results demonstrate that our proposed method outperforms the existing baselines.
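The feedback loop described above can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the paper's implementation: the function names are hypothetical, real retriever scores would come from a neural encoder over the dialogue context and knowledge candidates, and we assume the feedback objective is a KL divergence between the ranker's attention distribution and the retriever's score distribution, which is one common way to distill such a pseudo-supervised signal.

```python
import math

def softmax(xs):
    """Convert raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_feedback_loss(retriever_scores, ranker_attention):
    """KL(ranker_attention || softmax(retriever_scores)).

    The ranker's knowledge-attention scores act as a pseudo label:
    the retriever is pushed to assign high relevance to the knowledge
    entries the ranker attended to when scoring the response.
    """
    p = softmax(retriever_scores)
    return sum(a * math.log(a / q)
               for a, q in zip(ranker_attention, p) if a > 0)

# Toy training iteration with 3 knowledge candidates.
retriever_scores = [0.2, 1.5, -0.3]   # retriever's relevance logits
ranker_attention = [0.1, 0.8, 0.1]    # ranker's attention over knowledge (sums to 1)
loss = attention_feedback_loss(retriever_scores, ranker_attention)
```

Minimizing this loss over iterations aligns the retriever's knowledge distribution with the ranker's attention, removing the need for hand-annotated knowledge labels; the loss is zero exactly when the two distributions agree.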





Acknowledgments

This work is supported in part by NSFC (No. 2021YFC3340304). We would like to thank the anonymous reviewers and action editors for their helpful comments and suggestions.

Author information


Correspondence to Dongyan Zhao.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Li, Z., Feng, J., Tao, C., Zhao, D. (2022). Training Two-Stage Knowledge-Grounded Dialogues with Attention Feedback. In: Lu, W., Huang, S., Hong, Y., Zhou, X. (eds) Natural Language Processing and Chinese Computing. NLPCC 2022. Lecture Notes in Computer Science, vol. 13551. Springer, Cham. https://doi.org/10.1007/978-3-031-17120-8_37

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-17120-8_37


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-17119-2

  • Online ISBN: 978-3-031-17120-8

  • eBook Packages: Computer Science, Computer Science (R0)
