Summary++: Summarizing Chinese News Articles with Attention

  • Juan Zhao
  • Tong Lee Chung
  • Bin Xu
  • Minghu Jiang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11109)

Abstract

We present Summary++, the model that competed in the NLPCC 2018 summarization task. In this paper, we describe the task, our model, the results, and other aspects of our experiments in detail. The task is news article summarization in Chinese, where one summary sentence is generated per article. We use a neural encoder-decoder attention model with a pointer-generator network, and modify it to focus on the words attended to rather than the words predicted. Our model achieved second place in the task with a score of 0.285. The highlights of our model are that it runs at the character level, uses no extra features (e.g. part of speech, dependency structure), and requires very little preprocessing.
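To make the pointer-generator mixture concrete, below is a minimal NumPy sketch of a single decoding step, following the general formulation of See et al.'s pointer-generator networks. It is an illustration under our own simplifying assumptions (the function names, toy dimensions, and random inputs are hypothetical), not the Summary++ implementation, and it does not reproduce the paper's modification of focusing on attended words rather than predicted words.

import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pointer_generator_step(attn_scores, vocab_logits, p_gen_logit, src_ids, vocab_size):
    """One decoding step of a pointer-generator mixture.

    attn_scores : (src_len,) unnormalized attention scores over source characters
    vocab_logits: (vocab_size,) decoder logits over the output vocabulary
    p_gen_logit : scalar logit for the generate-vs-copy gate
    src_ids     : (src_len,) vocabulary ids of the source characters
    Returns the final probability distribution over the vocabulary.
    """
    attn = softmax(attn_scores)                  # attention / copy distribution
    vocab_dist = softmax(vocab_logits)           # generation distribution
    p_gen = 1.0 / (1.0 + np.exp(-p_gen_logit))   # sigmoid gate

    # Scatter-add the attention mass onto the vocabulary positions of the
    # source characters (the "copy" half of the mixture).
    copy_dist = np.zeros(vocab_size)
    np.add.at(copy_dist, src_ids, attn)

    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

# Toy usage: a 5-character source sentence and a vocabulary of 10 symbols.
rng = np.random.default_rng(0)
final_dist = pointer_generator_step(
    attn_scores=rng.normal(size=5),
    vocab_logits=rng.normal(size=10),
    p_gen_logit=0.3,
    src_ids=np.array([2, 7, 4, 4, 9]),
    vocab_size=10,
)
print(final_dist.sum())  # ~1.0: a valid probability distribution

The gate p_gen interpolates between generating a character from the output vocabulary and copying a character from the source, which is what allows a character-level model of this kind to reproduce rare or out-of-vocabulary source characters verbatim.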

Keywords

Text summarization · Sequence-to-sequence · Pointer · Coverage

Notes

Acknowledgement

This work was supported by the National Key Research and Development Program of China (2017YFB1401903), the National Social Science Major Fund of China (14ZDB154, 15ZDB017), and the National Natural Science Key Foundation of China (61433015).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Juan Zhao (1)
  • Tong Lee Chung (2)
  • Bin Xu (2, corresponding author)
  • Minghu Jiang (1)
  1. Lab of Computational Linguistics, School of Humanities, Tsinghua University, Beijing, China
  2. Computer Science Department, Tsinghua University, Beijing, China