
RoKGDS: A Robust Knowledge Grounded Dialog System

  • Conference paper
Natural Language Processing and Chinese Computing (NLPCC 2021)

Abstract

In this paper, we propose a pre-training based Robust Knowledge Grounded Dialog System (RoKGDS) to enhance model performance in unknown scenarios; it generalizes easily to various knowledge-grounded dialog tasks such as persona dialog, knowledge dialog, and recommendation dialog. We use a bucket encoder to efficiently extract all kinds of knowledge information (e.g., profiles, knowledge graphs, and dialog goals). To improve the robustness of the model, we develop a hybrid decoder with hybrid attention and a copy mechanism. The hybrid attention is an adaptation scheme for applying a pre-trained language model to our architecture, and the copy mechanism is a gate that decides whether to generate a word from the generic vocabulary or copy it from the input knowledge. Experiments show that our model is more robust than the baseline models. Furthermore, we use visualization to explain the effectiveness of the hybrid attention compared to the other two adaptation schemes. In the 2021 Language and Intelligence Challenge: Multi-Skill Dialog task, our best model ranked 3rd in the automatic evaluation stage and 5th in the human evaluation stage.
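The copy mechanism described in the abstract is, in spirit, a pointer-generator style gate. The sketch below is a minimal, hypothetical PyTorch illustration of such a gate, assuming a standard pointer-generator formulation (See et al., 2017); the module names, tensor shapes, and the single-head attention over knowledge tokens are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CopyGateHead(nn.Module):
    """Minimal sketch of a copy/generate gate over encoded knowledge tokens.

    Illustrative only: mixes a generation distribution over the generic
    vocabulary with a copy distribution over the input knowledge tokens.
    """

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_size, vocab_size)  # generation distribution
        self.gate_proj = nn.Linear(hidden_size * 2, 1)         # copy/generate gate

    def forward(self, dec_state, knowledge_states, knowledge_token_ids):
        # dec_state:           (batch, hidden)        current decoder hidden state
        # knowledge_states:    (batch, k_len, hidden) encoded knowledge tokens
        # knowledge_token_ids: (batch, k_len)         long tensor of vocabulary ids

        # Attention of the decoder state over the knowledge tokens.
        attn_scores = torch.bmm(knowledge_states, dec_state.unsqueeze(-1)).squeeze(-1)
        copy_dist = F.softmax(attn_scores, dim=-1)              # (batch, k_len)

        # Knowledge context vector summarising the attended tokens.
        context = torch.bmm(copy_dist.unsqueeze(1), knowledge_states).squeeze(1)

        # Gate p_gen in [0, 1]: probability of generating from the vocabulary
        # rather than copying from the knowledge input.
        p_gen = torch.sigmoid(self.gate_proj(torch.cat([dec_state, context], dim=-1)))

        gen_dist = F.softmax(self.vocab_proj(dec_state), dim=-1)  # (batch, vocab)

        # Scatter the copy distribution back into vocabulary space and mix.
        copy_in_vocab = torch.zeros_like(gen_dist).scatter_add(
            1, knowledge_token_ids, copy_dist)
        return p_gen * gen_dist + (1.0 - p_gen) * copy_in_vocab
```

In this formulation the final output distribution is p_gen · P_vocab + (1 − p_gen) · P_copy, so words that only appear in the grounding knowledge (e.g., entities from a profile or knowledge graph) can still be emitted even when they are rare or absent in the generic vocabulary.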


Notes

  1. Code will be available at https://github.com/z562/RoKGDS.



Acknowledgements

This research is funded by the Science and Technology Commission of Shanghai Municipality (No. 20511101205) and the Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University (No. 2020KEY001).

Author information


Corresponding author

Correspondence to Yan Yang.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, J. et al. (2021). RoKGDS: A Robust Knowledge Grounded Dialog System. In: Wang, L., Feng, Y., Hong, Y., He, R. (eds) Natural Language Processing and Chinese Computing. NLPCC 2021. Lecture Notes in Computer Science, vol 13029. Springer, Cham. https://doi.org/10.1007/978-3-030-88483-3_30

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-88483-3_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88482-6

  • Online ISBN: 978-3-030-88483-3

  • eBook Packages: Computer Science (R0)
