Abstract
Mainstream text summarization techniques currently fall into two categories: extractive and abstractive. Extractive methods suit long, clearly structured texts, whereas abstractive methods suit short texts. In this paper, we address the missing keywords and incomplete coverage that abstractive methods typically produce on long texts. To this end, we propose a two-stage model that combines extractive and abstractive methods to generate summaries. First, a multi-layer BiLSTM extracts salient content from the long text. Second, we use the classical UniLM as the base model, adding a novel copy mechanism to handle the out-of-vocabulary (OOV) problem and a sparse softmax to avoid overfitting. Extensive experiments demonstrate that our model outperforms the baseline models and generates higher-quality summaries.
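The abstract mentions a sparse softmax used to curb overfitting but does not spell out its formulation on this page. As a point of reference only, the sketch below shows a sparsemax-style projection in NumPy (Martins and Astudillo, 2016), one common realization of a sparse softmax; the function name and details are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def sparsemax(z):
        # Project logits z onto the probability simplex so that low-scoring
        # entries can receive exactly zero probability (unlike softmax).
        # Illustrative sketch only; not the paper's implementation.
        z = np.asarray(z, dtype=np.float64)
        z_sorted = np.sort(z)[::-1]              # logits in descending order
        cumsum = np.cumsum(z_sorted)
        k = np.arange(1, z.size + 1)
        support = 1 + k * z_sorted > cumsum      # entries kept in the support
        k_max = k[support][-1]                   # size of the support set
        tau = (cumsum[k_max - 1] - 1.0) / k_max  # threshold shared by the support
        return np.maximum(z - tau, 0.0)          # sparse probability vector

For example, sparsemax([2.0, 1.0, 0.1]) returns [1.0, 0.0, 0.0], whereas a standard softmax would spread probability mass over all three entries.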
Acknowledgement
This work was supported in part by the National Natural Science Foundation of China under Grant U1811263 and Grant 61772211.
Copyright information
© 2022 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Liang, R., Li, J., Huang, L., Lin, R., Lai, Y., Xiong, D. (2022). Extractive-Abstractive: A Two-Stage Model for Long Text Summarization. In: Sun, Y., et al. Computer Supported Cooperative Work and Social Computing. ChineseCSCW 2021. Communications in Computer and Information Science, vol 1492. Springer, Singapore. https://doi.org/10.1007/978-981-19-4549-6_14
DOI: https://doi.org/10.1007/978-981-19-4549-6_14
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-4548-9
Online ISBN: 978-981-19-4549-6
eBook Packages: Computer Science (R0)