Abstract
The use of large-scale pre-trained models for text summarization has attracted increasing attention in the computer science community. However, pre-trained models with millions of parameters and long training times are difficult to deploy. Furthermore, pre-trained models focus on language understanding but tend to neglect the reproduction of factual details when generating text. In this paper, we propose a text summarization method that applies knowledge distillation to a pre-trained model, which serves as the teacher model. We build a novel sequence-to-sequence model as the student model, which learns to imitate the teacher model’s knowledge. Specifically, we propose a variant of the pointer-generator network that integrates a multi-head attention mechanism, a coverage mechanism, and a copy mechanism. We apply this variant to our student model to mitigate word repetition and out-of-vocabulary words, thereby improving the quality of the generated summaries. In experiments on the Gigaword and Weibo datasets, our model achieves better performance and lower time cost than the baseline models.
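The distillation setup described above follows the standard teacher–student recipe: the student is trained on a blend of soft targets produced by the teacher and the usual cross-entropy against the reference summary. The abstract does not give the exact loss formulation, so the sketch below is a minimal, generic version; the function name, the temperature and alpha parameters, and the tensor shapes are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target_ids,
                      temperature=2.0, alpha=0.5):
    """Generic knowledge-distillation objective (hypothetical sketch):
    KL(teacher || student) on temperature-softened distributions,
    blended with cross-entropy on the gold summary tokens.

    student_logits, teacher_logits: (batch, seq_len, vocab_size)
    target_ids:                     (batch, seq_len)
    """
    # Soften both output distributions with the temperature.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Soft-target term, scaled by T^2 as in Hinton et al. (2015).
    kd = F.kl_div(log_p_student, p_teacher,
                  reduction="batchmean") * temperature ** 2
    # Hard-target term against the reference summary tokens.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                         target_ids.view(-1))
    return alpha * kd + (1.0 - alpha) * ce
```

The pointer-generator variant likewise mixes generating from a fixed vocabulary with copying from the source, while a coverage term penalises attending repeatedly to the same source positions. The following sketch shows one decoding step of such a mixture in the style of See et al. (2017); how the paper combines it with multi-head attention is not specified in the abstract, so all names and shapes here are assumed for illustration only.

```python
import torch

def pointer_generator_step(p_gen, vocab_dist, attn_dist, src_ids,
                           coverage, extended_vocab_size):
    """One decoding step of a pointer-generator mixture (illustrative sketch).

    p_gen:      (batch, 1)          generation probability from the decoder state
    vocab_dist: (batch, vocab_size) softmax over the fixed vocabulary
    attn_dist:  (batch, src_len)    attention weights over source tokens
    src_ids:    (batch, src_len)    source token ids in the extended vocabulary
    coverage:   (batch, src_len)    running sum of past attention distributions
    """
    batch_size = vocab_dist.size(0)
    # Mix generating from the vocabulary with copying from the source:
    # final P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on w.
    final_dist = torch.zeros(batch_size, extended_vocab_size,
                             device=vocab_dist.device)
    final_dist[:, :vocab_dist.size(1)] = p_gen * vocab_dist
    final_dist.scatter_add_(1, src_ids, (1.0 - p_gen) * attn_dist)
    # Coverage loss penalises re-attending to already-covered positions.
    coverage_loss = torch.sum(torch.min(attn_dist, coverage), dim=1).mean()
    new_coverage = coverage + attn_dist
    return final_dist, coverage_loss, new_coverage
```

In a full training loop the coverage loss would be added to the main objective with its own weight; that weight is not reported in the abstract and is left unspecified here.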
Notes
1. The Weibo dataset is available at https://drive.google.com/file/d/1ihnpHuVU1uHAUiaC4EpHjX3gRKp9ZW8h/view.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Dong, T., Shan, S., Liu, Y., Qian, Y., Ma, A. (2021). A Pointer-Generator Based Abstractive Summarization Model with Knowledge Distillation. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Communications in Computer and Information Science, vol 1516. Springer, Cham. https://doi.org/10.1007/978-3-030-92307-5_20
DOI: https://doi.org/10.1007/978-3-030-92307-5_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-92306-8
Online ISBN: 978-3-030-92307-5