Abstract
Pretrained language models based on the Transformer architecture are behind recent breakthroughs in many areas of NLP, including sentiment analysis, question answering, and named entity recognition. Headline generation is a special kind of text summarization task: to succeed, a model needs natural language understanding that goes beyond the meaning of individual words and sentences, as well as the ability to distinguish essential information. In this paper, we fine-tune two pretrained Transformer-based models (mBART and BertSumAbs) for this task and achieve new state-of-the-art results on the RIA and Lenta datasets of Russian news. BertSumAbs improves ROUGE on average by 2.9 and 2.0 points, respectively, over the previous best scores achieved by the Phrase-Based Attentional Transformer and CopyNet.
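To make the fine-tuning setup concrete, the sketch below shows one way to adapt mBART to headline generation with the Hugging Face transformers library. It is a minimal illustration under stated assumptions, not the authors' code: the checkpoint facebook/mbart-large-cc25 is the public mBART release, while the data file ria_news.jsonl and its text/title fields are hypothetical placeholders for the RIA corpus.

    # A minimal sketch of fine-tuning mBART for Russian headline generation with
    # Hugging Face transformers. The data file "ria_news.jsonl" and its
    # "text"/"title" fields are hypothetical placeholders, not the authors' setup.
    from datasets import load_dataset
    from transformers import (
        DataCollatorForSeq2Seq,
        MBartForConditionalGeneration,
        MBartTokenizer,
        Seq2SeqTrainer,
        Seq2SeqTrainingArguments,
    )

    tokenizer = MBartTokenizer.from_pretrained(
        "facebook/mbart-large-cc25", src_lang="ru_RU", tgt_lang="ru_RU"
    )
    model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

    train = load_dataset("json", data_files="ria_news.jsonl")["train"]

    def preprocess(batch):
        # Encode article bodies as encoder inputs and headlines as decoder targets.
        inputs = tokenizer(batch["text"], max_length=512, truncation=True)
        labels = tokenizer(text_target=batch["title"], max_length=48, truncation=True)
        inputs["labels"] = labels["input_ids"]
        return inputs

    train = train.map(preprocess, batched=True, remove_columns=train.column_names)

    args = Seq2SeqTrainingArguments(
        output_dir="mbart-headlines",
        per_device_train_batch_size=4,
        learning_rate=5e-5,
        num_train_epochs=3,
        predict_with_generate=True,  # generate headlines during evaluation
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train,
        # Pads inputs and labels dynamically per batch.
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
        tokenizer=tokenizer,
    )
    trainer.train()

At inference time, calling model.generate on a tokenized article and decoding with tokenizer.batch_decode yields the candidate headline, which can then be scored against the reference with ROUGE.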
References
Murao, K., et al.: A case study on neural headline generation for editing support. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Industry Papers), pp. 73–82 (2019)
Sun, C., Qiu, X., Xu, Y., Huang, X.: How to fine-tune BERT for text classification? In: Sun, M., Huang, X., Ji, H., Liu, Z., Liu, Y. (eds.) CCL 2019. LNCS (LNAI), vol. 11856, pp. 194–206. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32381-3_16
Rush, A.M., Chopra, S., Weston, J.: A neural attention model for abstractive sentence summarization. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 379–389 (2015)
Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)
Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., Huang, X.: Pre-trained models for natural language processing: a survey (2020). arXiv preprint arXiv:2003.08271
Liu, Y., et al.: Multilingual denoising pre-training for neural machine translation (2020). arXiv preprint arXiv:2001.08210
Liu, Y., Lapata, M.: Text summarization with pretrained encoders. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (2019)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding (2018). arXiv preprint arXiv:1810.04805
Kuratov, Y., Arkhipov, M.: Adaptation of deep bidirectional multilingual transformers for Russian language (2019). arXiv preprint arXiv:1905.07213
Sokolov, A.: Phrase-based attentional transformer for headline generation. In: Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialogue 2019” (2019)
Gavrilov, D., Kalaidin, P., Malykh, V.: Self-attentive model for headline generation. In: Azzopardi, L., Stein, B., Fuhr, N., Mayr, P., Hauff, C., Hiemstra, D. (eds.) ECIR 2019. LNCS, vol. 11438, pp. 87–93. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-15719-7_11
Gusev, I.O.: Importance of copying mechanism for news headline generation. In: Komp’juternaja Lingvistika i Intellektual’nye Tehnologii, pp. 229–236. ABBYY Production LLC (2019)
Lin, C.Y., Och, F.J.: Looking for a few good metrics: ROUGE and its evaluation. In: Proceedings of the 4th NTCIR Workshop (2004)
Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318 (2002)
Gu, J., Lu, Z., Li, H., Li, V.O.: Incorporating copying mechanism in sequence-to-sequence learning (2016). arXiv preprint arXiv:1603.06393
See, A., Liu, P., Manning, C.: Get to the point: summarization with pointer-generator networks. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, vol. 1, pp. 1073–1083. Association for Computational Linguistics (2017)