Abstract
Neural summarization models have a fixed-size input limitation: if the text length exceeds the model’s maximal input length, some document content (possibly summary-relevant) is truncated. Summarizing windows of maximal input size independently prevents information flow between windows and leads to incoherent summaries. We propose windowing models for neural abstractive summarization of (arbitrarily) long texts. We extend the sequence-to-sequence model augmented with a pointer-generator network by (1) allowing the encoder to slide over different windows of the input document and (2) sharing the decoder and retaining its state across input windows. We explore two windowing variants: Static Windowing precomputes the number of tokens the decoder should generate from each window (based on training-corpus statistics), whereas in Dynamic Windowing the decoder learns to emit a token that signals the shift to the next input window. Empirical results show our models to be effective in the intended use case: summarizing long texts in which relevant content is not confined to the beginning of the document.
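To make the windowing mechanism concrete, the following is a minimal Python sketch of how a long token sequence could be split into overlapping encoder windows of size \(T_w\) with a fixed stride. The function name `make_windows`, the `PAD` token, and the parameter names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): slicing a long document into
# overlapping encoder windows of `window_size` tokens each, moving by
# `stride` tokens, and padding the final window(s) to the full window size.
PAD = "<pad>"  # hypothetical padding token


def make_windows(tokens, window_size, stride):
    """Return fixed-size, possibly overlapping windows over a token list."""
    windows = []
    for start in range(0, max(len(tokens), 1), stride):
        window = tokens[start:start + window_size]
        if not window:
            break
        # Pad so that every window has exactly `window_size` tokens.
        window += [PAD] * (window_size - len(window))
        windows.append(window)
        if start + window_size >= len(tokens):
            break  # the last window already reaches the end of the document
    return windows


# Example: a 10-token document, windows of 4 tokens, stride of 3.
doc = [f"tok{i}" for i in range(10)]
for w in make_windows(doc, window_size=4, stride=3):
    print(w)
```

In the proposed models, the encoder is run over each such window in turn while a single decoder, retaining its state, generates the summary across windows; under Dynamic Windowing the decoder additionally emits a special token to trigger the shift to the next window.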
Notes
1. We also experimented with a Transformer [17] encoder/decoder, but obtained worse results.
2. We pad the last window(s) if shorter than \(T_w\) tokens.
3. For example, with \(d=1.2\) and \(k=0.8\), the early windows receive larger weights than the later windows (see the sketch after these notes).
4.
5. Depending on \(T_w\) and \(s_s\), a sentence can appear in more than one window. In such cases, we map the sentence to its last containing window.
6.
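As an illustration of the Static Windowing idea in note 3, the sketch below assumes a simple geometric decay in which window \(j\) receives an unnormalized weight \(k \cdot d^{-j}\). This exact formula is an assumption made here for illustration (the paper defines the actual weighting), but with \(d=1.2\) and \(k=0.8\) it reproduces the stated behaviour of early windows receiving larger shares of the summary-length budget.

```python
# Hypothetical sketch only: the exact Static Windowing weighting is defined in
# the paper. Here we assume window j gets an unnormalized weight k * d**(-j),
# so that with d > 1 earlier windows receive larger token budgets.
def window_budgets(num_windows, summary_len, d=1.2, k=0.8):
    """Split a total summary token budget across windows, favouring early ones."""
    raw = [k * d ** (-j) for j in range(num_windows)]
    total = sum(raw)
    return [round(summary_len * w / total) for w in raw]


print(window_budgets(num_windows=4, summary_len=100))  # -> [32, 27, 22, 19]
```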
References
Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: Proceedings of ICLR (2015). http://arxiv.org/abs/1409.0473
Celikyilmaz, A., Bosselut, A., He, X., Choi, Y.: Deep communicating agents for abstractive summarization. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1 (Long Papers), pp. 1662–1675 (2018)
Cohan, A., Dernoncourt, F., Kim, D.S., Bui, T., Kim, S., Chang, W., Goharian, N.: A discourse-aware attention model for abstractive summarization of long documents. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 2 (Short Papers), pp. 615–621 (2018)
Conneau, A., Kiela, D., Schwenk, H., Barrault, L., Bordes, A.: Supervised learning of universal sentence representations from natural language inference data. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 670–680 (2017)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1 (Long and Short Papers), pp. 4171–4186 (2019)
Hermann, K.M., et al.: Teaching machines to read and comprehend. In: Advances in Neural Information Processing Systems, pp. 1693–1701 (2015)
Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2015)
Koupaee, M., Wang, W.Y.: WikiHow: a large scale text summarization dataset. CoRR abs/1810.09305 (2018). http://arxiv.org/abs/1810.09305
Kusner, M., Sun, Y., Kolkin, N., Weinberger, K.: From word embeddings to document distances. In: International Conference on Machine Learning, pp. 957–966 (2015)
Luong, M., Pham, H., Manning, C.D.: Effective approaches to attention-based neural machine translation. In: Proceedings of EMNLP, pp. 1412–1421 (2015). http://arxiv.org/abs/1508.04025
Makino, T., Iwakura, T., Takamura, H., Okumura, M.: Global optimization under length constraint for neural text summarization. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1039–1048 (2019). https://www.aclweb.org/anthology/P19-1099
Nallapati, R., Xiang, B., Zhou, B.: Sequence-to-sequence RNNs for text summarization. In: Proceedings of ICLR: Workshop Track (2016). http://arxiv.org/abs/1602.06023
Nenkova, A., McKeown, K.R.: Automatic summarization. Found. Trends Inf. Retr. 5(2–3), 103–233 (2011)
Paulus, R., Xiong, C., Socher, R.: A deep reinforced model for abstractive summarization. In: Proceedings of ICLR (2018). http://arxiv.org/abs/1705.04304
See, A., Liu, P.J., Manning, C.D.: Get to the point: summarization with pointer-generator networks. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1073–1083 (2017)
Tan, J., Wan, X., Xiao, J.: Abstractive document summarization with a graph-based attentional neural model. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1171–1181. Association for Computational Linguistics (2017). https://doi.org/10.18653/v1/P17-1108. http://aclweb.org/anthology/P17-1108
Vaswani, A., et al.: Attention is all you need. In: Proceedings of NeurIPS (2017)
You, Y., Jia, W., Liu, T., Yang, W.: Improving abstractive document summarization with salient information modeling. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2132–2141 (2019). https://www.aclweb.org/anthology/P19-1205
Zhelezniak, V., Savkov, A., Shen, A., Moramarco, F., Flann, J., Hammerla, N.Y.: Don’t settle for average, go for the max: fuzzy sets and max-pooled word vectors. In: Proceedings of ICLR (2019)
Acknowledgment
The work of Goran Glavaš is supported by the Baden Württemberg Stiftung (Eliteprogramm, AGREE grant).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Schüller, L., Wilhelm, F., Kreiling, N., Glavaš, G. (2021). Windowing Models for Abstractive Summarization of Long Texts. In: Hiemstra, D., Moens, M.-F., Mothe, J., Perego, R., Potthast, M., Sebastiani, F. (eds) Advances in Information Retrieval. ECIR 2021. Lecture Notes in Computer Science, vol 12657. Springer, Cham. https://doi.org/10.1007/978-3-030-72240-1_39
DOI: https://doi.org/10.1007/978-3-030-72240-1_39
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-72239-5
Online ISBN: 978-3-030-72240-1
eBook Packages: Computer Science, Computer Science (R0)