Deletion-Based Sentence Compression Using Bi-enc-dec LSTM

  • Dac-Viet Lai
  • Nguyen Truong Son
  • Nguyen Le Minh
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 781)

Abstract

We propose a model that combines an enhanced Bidirectional Long Short-Term Memory (Bi-LSTM) network with well-known classifiers such as the Conditional Random Field (CRF) and the Support Vector Machine (SVM) for sentence compression, in which the LSTM network works as a feature extractor. The task is to classify each word into one of two categories: retained or removed. Because reliable feature-engineering techniques are unavailable in many languages, we use readily obtainable word embeddings as the only feature. Our models are trained and evaluated on public English and Vietnamese data sets and achieve state-of-the-art performance.
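The framing above treats compression as binary sequence labeling over words. As an illustrative sketch only (not the authors' code): a Bi-LSTM would supply per-token emission scores for KEEP vs. DELETE, and a CRF layer would decode the best label sequence with Viterbi. The emission and transition scores below are hypothetical hand-set numbers standing in for learned parameters.

```python
KEEP, DEL = 0, 1

def viterbi(emissions, transitions):
    """Return the highest-scoring label sequence (0=KEEP, 1=DEL)."""
    n = len(emissions)
    score = [emissions[0][KEEP], emissions[0][DEL]]
    back = []
    for t in range(1, n):
        new_score = [0.0, 0.0]
        ptr = [0, 0]
        for cur in (KEEP, DEL):
            # Best previous label under the CRF transition scores.
            best_prev = max((KEEP, DEL),
                            key=lambda p: score[p] + transitions[p][cur])
            ptr[cur] = best_prev
            new_score[cur] = (score[best_prev]
                              + transitions[best_prev][cur]
                              + emissions[t][cur])
        score = new_score
        back.append(ptr)
    # Backtrack from the best final label.
    last = max((KEEP, DEL), key=lambda l: score[l])
    labels = [last]
    for ptr in reversed(back):
        last = ptr[last]
        labels.append(last)
    return labels[::-1]

def compress(tokens, emissions, transitions):
    """Keep only the tokens labeled KEEP by the decoder."""
    labels = viterbi(emissions, transitions)
    return [w for w, l in zip(tokens, labels) if l == KEEP]

tokens = ["The", "very", "old", "cat", "slept"]
# Hypothetical Bi-LSTM emission scores: [keep, delete] per token.
emissions = [[2.0, 0.1], [0.2, 1.5], [0.3, 1.2], [2.5, 0.1], [2.2, 0.2]]
# Slight preference for staying in the same label (run consistency).
transitions = [[0.5, 0.0], [0.0, 0.5]]

print(" ".join(compress(tokens, emissions, transitions)))  # The cat slept
```

The CRF transitions encourage contiguous runs of kept or deleted words, which is the practical advantage of joint decoding over classifying each word independently.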

Keywords

Text summarization · Sentence compression · Bidirectional LSTM · Sequence to sequence · Conditional random field · Support vector machine

Notes

Acknowledgment

This work was supported by JSPS KAKENHI Grant number JP15K16048.


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Dac-Viet Lai (1)
  • Nguyen Truong Son (1, 2)
  • Nguyen Le Minh (1)
  1. Japan Advanced Institute of Science and Technology, Nomi, Japan
  2. University of Science, VNU-HCM, Ho Chi Minh City, Vietnam