
Abstractive Document Summarization via Bidirectional Decoder

  • Xin Wan
  • Chen Li
  • Ruijia Wang
  • Ding Xiao
  • Chuan Shi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11323)

Abstract

Sequence-to-sequence architectures with attention mechanisms are widely used in abstractive text summarization and have achieved a series of remarkable results. However, these methods may suffer from error accumulation: at test time, the decoder's input at each step is the word it generated at the previous step, so decoder-side errors are continuously amplified. This paper proposes a Summarization model with a Bidirectional decoder (BiSum), in which a backward decoder provides a reference for the forward decoder. We apply attention over both the encoder and the backward decoder so that the summary generated by the backward decoder can be properly understood by the forward decoder. A pointer mechanism is also added to both the backward and forward decoders to address the out-of-vocabulary problem. In addition, we remove the word segmentation step from the usual Chinese preprocessing pipeline, which greatly improves summary quality. Experimental results show that our model produces higher-quality summaries on the Chinese TTNews dataset and the English CNN/Daily Mail dataset.
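The bidirectional-decoder idea can be sketched concretely. Below is a minimal PyTorch illustration of a forward-decoder step that attends both to the encoder states and to the hidden states left by a backward (right-to-left) decoding pass, which is how the abstract describes the backward decoder serving as a reference. All module names, sizes, the GRU cell, and the additive-attention form are our own assumptions for illustration; the pointer (copy) mechanism mentioned in the abstract is omitted for brevity, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Attention(nn.Module):
    """Additive (Bahdanau-style) attention over a memory of hidden states."""

    def __init__(self, hidden_size):
        super().__init__()
        self.w_query = nn.Linear(hidden_size, hidden_size, bias=False)
        self.w_memory = nn.Linear(hidden_size, hidden_size, bias=False)
        self.v = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, query, memory):
        # query: (batch, hidden), memory: (batch, time, hidden)
        scores = self.v(torch.tanh(self.w_query(query).unsqueeze(1) + self.w_memory(memory)))
        weights = F.softmax(scores, dim=1)        # (batch, time, 1)
        context = (weights * memory).sum(dim=1)   # (batch, hidden)
        return context, weights.squeeze(-1)


class ForwardDecoderStep(nn.Module):
    """One step of the forward decoder: attends to the encoder states and to
    the hidden states produced by a backward (right-to-left) decoding pass."""

    def __init__(self, vocab_size, embed_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.cell = nn.GRUCell(embed_size + 2 * hidden_size, hidden_size)
        self.enc_attn = Attention(hidden_size)
        self.bwd_attn = Attention(hidden_size)
        self.out = nn.Linear(3 * hidden_size, vocab_size)

    def forward(self, prev_word, state, enc_states, bwd_states):
        # prev_word: (batch,) token ids; state: (batch, hidden)
        # enc_states: (batch, src_len, hidden); bwd_states: (batch, tgt_len, hidden)
        enc_ctx, _ = self.enc_attn(state, enc_states)  # context over the source document
        bwd_ctx, _ = self.bwd_attn(state, bwd_states)  # context over the backward draft
        emb = self.embed(prev_word)
        state = self.cell(torch.cat([emb, enc_ctx, bwd_ctx], dim=-1), state)
        logits = self.out(torch.cat([state, enc_ctx, bwd_ctx], dim=-1))
        return logits, state
```

Under these assumptions, inference would first run the backward decoder to cache its hidden states as bwd_states, then decode left to right with ForwardDecoderStep, feeding each predicted token back in as prev_word.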

Keywords

Abstractive summarization · Bidirectional decoder · Attention mechanism · Sequence-to-sequence architecture

Acknowledgement

This work is supported in part by the National Natural Science Foundation of China (No. 61772082, 61806020, 61375058), and the Beijing Municipal Natural Science Foundation (4182043).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Xin Wan (1)
  • Chen Li (1)
  • Ruijia Wang (1)
  • Ding Xiao (1)
  • Chuan Shi (1)
  1. Beijing University of Posts and Telecommunications, Beijing, China
