
Advances of Transformer-Based Models for News Headline Generation

Conference paper

Artificial Intelligence and Natural Language (AINL 2020)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1292)

Abstract

Pretrained language models based on the Transformer architecture are behind recent breakthroughs in many areas of NLP, including sentiment analysis, question answering, and named entity recognition. Headline generation is a special kind of text summarization task: to succeed at it, a model needs natural language understanding that goes beyond the meaning of individual words and sentences, as well as the ability to distinguish essential information. In this paper, we fine-tune two pretrained Transformer-based models (mBART and BertSumAbs) for this task and achieve new state-of-the-art results on the RIA and Lenta datasets of Russian news. BertSumAbs improves ROUGE on average by 2.9 and 2.0 points, respectively, over the previous best scores achieved by the Phrase-Based Attentional Transformer and CopyNet.
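For concreteness, the sketch below shows how fine-tuning a pretrained mBART checkpoint for headline generation can be set up with the Hugging Face transformers library. This is an illustration under stated assumptions, not the authors' exact pipeline: the checkpoint name, language codes, sequence lengths, and learning rate are all illustrative choices.

    import torch
    from transformers import MBartForConditionalGeneration, MBartTokenizer

    # Assumption: a public multilingual mBART checkpoint; the paper fine-tunes
    # mBART on Russian news, but its exact checkpoint and config are not shown here.
    name = "facebook/mbart-large-cc25"
    tokenizer = MBartTokenizer.from_pretrained(name, src_lang="ru_RU", tgt_lang="ru_RU")
    model = MBartForConditionalGeneration.from_pretrained(name)
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # illustrative

    article = "..."   # news article body (Russian text in practice)
    headline = "..."  # reference headline for that article

    # One training step: encode article and headline, minimize cross-entropy.
    batch = tokenizer(article, max_length=512, truncation=True, return_tensors="pt")
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(headline, max_length=48, truncation=True,
                           return_tensors="pt").input_ids
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    # Inference: generate a headline with beam search.
    ids = model.generate(**batch, num_beams=5, max_length=48,
                         decoder_start_token_id=tokenizer.lang_code_to_id["ru_RU"])
    print(tokenizer.decode(ids[0], skip_special_tokens=True))

BertSumAbs, the stronger of the two models in the abstract, follows the same encoder-decoder fine-tuning recipe but initializes its encoder from a pretrained BERT; the authors' fork of the PreSumm codebase is linked in the Notes below.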


Notes

  1. https://github.com/leshanbog/PreSumm
  2. https://github.com/yutkin/Lenta.Ru-News-Dataset
  3. https://github.com/IlyaGusev/summarus
  4. https://ria.ru/
  5. https://toloka.yandex.com/
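Since the headline-quality numbers in the abstract are ROUGE scores, a minimal scoring sketch may be useful. It uses the rouge-score package, which is an assumption here; the authors' own evaluation code lives in the repositories listed above, and details such as tokenization or lemmatization of Russian text may differ.

    from rouge_score import rouge_scorer

    # Assumption: plain ROUGE-1/2/L F1 on raw tokens; the paper's exact
    # preprocessing for Russian is not reproduced here.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                      use_stemmer=False)

    reference = "reference headline from the dataset"
    prediction = "headline produced by the fine-tuned model"

    scores = scorer.score(reference, prediction)  # score(target, prediction)
    for variant, result in scores.items():
        print(f"{variant}: F1 = {result.fmeasure:.3f}")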

References

  1. Murao, K., et al.: A case study on neural headline generation for editing support. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Industry Papers), pp. 73–82 (2019)

  2. Sun, C., Qiu, X., Xu, Y., Huang, X.: How to fine-tune BERT for text classification? In: Sun, M., Huang, X., Ji, H., Liu, Z., Liu, Y. (eds.) CCL 2019. LNCS (LNAI), vol. 11856, pp. 194–206. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32381-3_16

  3. Rush, A.M., Chopra, S., Weston, J.: A neural attention model for abstractive sentence summarization. In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 379–389 (2015)

  4. Vaswani, A., et al.: Attention is all you need. In: NeurIPS (2017)

  5. Qiu, X., Sun, T., Xu, Y., Shao, Y., Dai, N., Huang, X.: Pre-trained models for natural language processing: a survey. arXiv preprint arXiv:2003.08271 (2020)

  6. Liu, Y., et al.: Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210 (2020)

  7. Liu, Y., Lapata, M.: Text summarization with pretrained encoders. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (2019)

  8. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)

  9. Kuratov, Y., Arkhipov, M.: Adaptation of deep bidirectional multilingual transformers for Russian language. arXiv preprint arXiv:1905.07213 (2019)

  10. Sokolov, A.: Phrase-based attentional transformer for headline generation. In: Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference “Dialogue 2019” (2019)

  11. Gavrilov, D., Kalaidin, P., Malykh, V.: Self-attentive model for headline generation. In: Azzopardi, L., Stein, B., Fuhr, N., Mayr, P., Hauff, C., Hiemstra, D. (eds.) ECIR 2019. LNCS, vol. 11438, pp. 87–93. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-15719-7_11

  12. Gusev, I.O.: Importance of copying mechanism for news headline generation. In: Komp’juternaja Lingvistika i Intellektual’nye Tehnologii [Computational Linguistics and Intellectual Technologies], pp. 229–236. ABBYY Production LLC (2019)

  13. Lin, C.Y., Och, F.J.: Looking for a few good metrics: ROUGE and its evaluation. In: NTCIR Workshop (2004)

  14. Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318 (2002)

  15. Gu, J., Lu, Z., Li, H., Li, V.O.: Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393 (2016)

  16. See, A., Liu, P., Manning, C.: Get to the point: summarization with pointer-generator networks. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1073–1083. Association for Computational Linguistics, Vancouver (2017)


Author information


Correspondence to Alexey Bukhtiyarov.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Bukhtiyarov, A., Gusev, I. (2020). Advances of Transformer-Based Models for News Headline Generation. In: Filchenkov, A., Kauttonen, J., Pivovarova, L. (eds) Artificial Intelligence and Natural Language. AINL 2020. Communications in Computer and Information Science, vol 1292. Springer, Cham. https://doi.org/10.1007/978-3-030-59082-6_4


  • DOI: https://doi.org/10.1007/978-3-030-59082-6_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-59081-9

  • Online ISBN: 978-3-030-59082-6

  • eBook Packages: Computer Science; Computer Science (R0)
