Abstract
Generating informative, coherent and fluent responses to user queries is challenging yet critical for a rich user experience and the eventual success of dialogue systems. Knowledge-grounded dialogue systems leverage external knowledge to introduce relevant facts into a dialogue. Such systems must understand the semantic relatedness between the dialogue context and the available knowledge, and use this information for response generation. Although various innovative models have been proposed, they neither exploit the semantic entailment between the dialogue history and the knowledge nor effectively process knowledge from both structured and unstructured sources. In this work, we propose PICKD, a two-stage framework for knowledgeable dialogue. In the first stage, the Knowledge Selector chooses knowledge pertinent to the dialogue context from both structured and unstructured knowledge sources. PICKD leverages novel In-Situ prompt tuning for knowledge selection, wherein prompt tokens are injected into the dialogue-knowledge text tokens during knowledge retrieval. In the second stage, the Response Generator produces fluent and factual responses by utilising the retrieved knowledge and the dialogue context. Extensive experiments on three domain-specific datasets demonstrate the effectiveness of PICKD over other baseline methodologies for knowledge-grounded dialogue. The source code is available at https://github.com/rajbsk/pickd.
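The abstract's description of In-Situ prompt tuning suggests the pattern sketched below: trainable prompt embeddings are spliced between the dialogue tokens and the candidate-knowledge tokens before the pair is scored for relevance. This is only an illustrative PyTorch/Transformers approximation under stated assumptions, not the authors' released implementation; the class name InSituSelector, the roberta-base backbone, the prompt length n_prompt and the linear scoring head are all assumptions made for the sketch.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class InSituSelector(nn.Module):
    """Scores a (dialogue, knowledge) pair with prompts injected in situ."""
    def __init__(self, model_name="roberta-base", n_prompt=8):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Trainable prompt vectors; these, rather than the full encoder,
        # can carry most of the task-specific learning.
        self.prompt = nn.Parameter(torch.randn(n_prompt, hidden) * 0.02)
        self.score = nn.Linear(hidden, 1)  # relevance logit head

    def forward(self, dial_ids, know_ids):
        embed = self.encoder.get_input_embeddings()
        d, k = embed(dial_ids), embed(know_ids)             # (B, Ld, H), (B, Lk, H)
        p = self.prompt.unsqueeze(0).expand(d.size(0), -1, -1)
        # "In situ": the prompts sit between the dialogue and knowledge
        # tokens rather than being prepended to the whole sequence.
        x = torch.cat([d, p, k], dim=1)
        h = self.encoder(inputs_embeds=x).last_hidden_state
        return self.score(h[:, 0])                          # score at <s>

# Hypothetical usage: score one candidate knowledge snippet for a query.
tok = AutoTokenizer.from_pretrained("roberta-base")
model = InSituSelector()
d = tok("Who directed Inception?", return_tensors="pt").input_ids
k = tok("Inception was directed by Christopher Nolan.", return_tensors="pt").input_ids
print(model(d, k))  # higher logit = more relevant snippet

In such a setup, the top-scoring snippet would then be passed, together with the dialogue context, to the second-stage generator; structured knowledge (e.g. triples) can be linearised into text before scoring, so both source types flow through the same selector.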
Acknowledgements
This work is supported by a Government of Ireland Postgraduate Fellowship from the Irish Research Council under project ID GOIPG/2019/3480. The work is also co-supported by Science Foundation Ireland under grant number SFI/12/RC/2289_P2 (Insight).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Sarkar, R., Goswami, K., Arcan, M., McCrae, J. (2023). PICKD: In-Situ Prompt Tuning for Knowledge-Grounded Dialogue Generation. In: Kashima, H., Ide, T., Peng, W.-C. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2023. Lecture Notes in Computer Science, vol 13938. Springer, Cham. https://doi.org/10.1007/978-3-031-33383-5_10
DOI: https://doi.org/10.1007/978-3-031-33383-5_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-33382-8
Online ISBN: 978-3-031-33383-5