An Attention-Based Long-Short-Term-Memory Model for Paraphrase Generation

  • Conference paper
  • In: Integrated Uncertainty in Knowledge Modelling and Decision Making (IUKM 2018)

Abstract

Neural-network-based sequence-to-sequence models have been shown to be an effective approach to paraphrase generation. In paraphrase generation, some words in the source text should be ignored when generating the target text, a problem that current models do not address adequately. To overcome this limitation, we propose a penalty-coefficient attention-based Residual Long Short-Term Memory (PCA-RLSTM) neural network that forms an end-to-end paraphrase generation model. Extensive experiments on two of the most popular corpora (PPDB and WikiAnswers) show that the proposed model outperforms state-of-the-art models for the paraphrase generation problem.
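
The abstract names the penalty-coefficient attention mechanism but does not reproduce its formulation on this page. As a rough, non-authoritative sketch of the general idea, the NumPy snippet below shows one plausible way a per-source-word penalty coefficient could down-weight attention to words that should be ignored. The function name penalized_attention, the bilinear scoring scheme, and the coefficient values are illustrative assumptions, not the authors' exact PCA-RLSTM method.

    # Minimal illustrative sketch: attention rescaled by per-word penalty coefficients.
    # All names and values below are assumptions, not the paper's formulation.
    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    def penalized_attention(decoder_state, encoder_states, W_a, penalty):
        """Context vector in which source words with a small penalty
        coefficient contribute less to the attention distribution."""
        scores = encoder_states @ (W_a @ decoder_state)   # alignment scores, shape (T,)
        weights = softmax(scores) * penalty               # down-weight ignorable words
        weights = weights / weights.sum()                 # renormalize to a distribution
        context = weights @ encoder_states                # weighted sum, shape (H,)
        return context, weights

    # Toy usage: 5 source words, hidden size 8; the 3rd word is mostly ignored.
    T, H = 5, 8
    rng = np.random.default_rng(0)
    encoder_states = rng.normal(size=(T, H))
    decoder_state = rng.normal(size=H)
    W_a = rng.normal(size=(H, H))
    penalty = np.array([1.0, 1.0, 0.1, 1.0, 0.8])         # hypothetical coefficients in (0, 1]
    context, weights = penalized_attention(decoder_state, encoder_states, W_a, penalty)
    print(weights.round(3))

In a full sequence-to-sequence decoder, the resulting context vector would be combined with the decoder state at every generation step; here the penalty simply rescales the attention distribution before renormalization.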

Notes

  1. https://apaszke.github.io/lstm-explained.html.

References

  1. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 (2014)

  2. Bannard, C., Callison-Burch, C.: Paraphrasing with bilingual parallel corpora. In: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pp. 597–604. Association for Computational Linguistics (2005)

  3. Bengio, Y., Simard, P., Frasconi, P.: Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Networks 5(2), 157–166 (1994)

  4. Chorowski, J.K., Bahdanau, D., Serdyuk, D., Cho, K., Bengio, Y.: Attention-based models for speech recognition. In: Advances in Neural Information Processing Systems, pp. 577–585 (2015)

  5. Fader, A., Zettlemoyer, L., Etzioni, O.: Paraphrase-driven learning for open question answering. In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, pp. 1608–1618 (2013)

  6. Graves, A., Jaitly, N., Mohamed, A.-R.: Hybrid speech recognition with deep bidirectional LSTM. In: 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 273–278. IEEE (2013)

  7. Graves, A., Wayne, G., Danihelka, I.: Neural Turing machines. arXiv preprint arXiv:1410.5401 (2014)

  8. Hasan, S.A., Liu, B., Liu, J., Qadir, A., Lee, K., Datla, V., Prakash, A., Farri, O.: Neural clinical paraphrase generation with attention. In: ClinicalNLP 2016, p. 42 (2016)

  9. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  10. Huang, G., Liu, Z., Weinberger, K.Q., van der Maaten, L.: Densely connected convolutional networks. arXiv preprint arXiv:1608.06993 (2016)

  11. Kohavi, R.: A study of cross-validation and bootstrap for accuracy estimation and model selection, pp. 1137–1143. Morgan Kaufmann (1995)

  12. Kolesnyk, V., Rocktäschel, T., Riedel, S.: Generating natural language inference chains. arXiv preprint arXiv:1606.01404 (2016)

  13. Kozlowski, R., McCoy, K.F., Vijay-Shanker, K.: Generation of single-sentence paraphrases from predicate/argument structure using lexico-grammatical resources. In: Proceedings of the Second International Workshop on Paraphrasing, vol. 16, pp. 1–8. Association for Computational Linguistics (2003)

  14. Lavie, A., Agarwal, A.: Meteor: an automatic metric for MT evaluation with high levels of correlation with human judgments. In: Proceedings of the Second Workshop on Statistical Machine Translation, pp. 228–231. Association for Computational Linguistics (2007)

  15. Li, X., Wu, X.: Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition. In: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4520–4524. IEEE (2015)

  16. Liu, C., Dahlmeier, D., Ng, H.T.: PEM: a paraphrase evaluation metric exploiting parallel texts. In: Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pp. 923–932. Association for Computational Linguistics (2010)

  17. Madnani, N., Dorr, B.J.: Generating phrasal and sentential paraphrases: a survey of data-driven methods. Comput. Linguist. 36(3), 341–387 (2010)

  18. McKeown, K.R.: Paraphrasing questions using given and new information. Comput. Linguist. 9(1), 1–10 (1983)

  19. Luong, M.-T., Pham, H., Manning, C.D.: Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015)

  20. Nguyen, N.K., Le, A.-C., Pham, H.T.: Deep bi-directional long short-term memory neural networks for sentiment analysis of social data. In: Huynh, V.-N., Inuiguchi, M., Le, B., Le, B.N., Denoeux, T. (eds.) IUKM 2016. LNCS (LNAI), vol. 9978, pp. 255–268. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49046-5_22

  21. Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: Bleu: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pp. 311–318. Association for Computational Linguistics (2002)

  22. Pascanu, R., Gulcehre, C., Cho, K., Bengio, Y.: How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026 (2013)

  23. Pavlick, E., Rastogi, P., Ganitkevitch, J., Van Durme, B., Callison-Burch, C.: PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification (2015)

  24. Prakash, A., Hasan, S.A., Lee, K., Datla, V., Qadir, A., Liu, J., Farri, O.: Neural paraphrase generation with stacked residual LSTM networks. arXiv preprint arXiv:1610.03098 (2016)

  25. Rus, V., Lintean, M.: A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In: Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pp. 157–162. Association for Computational Linguistics (2012)

  26. Serban, I.V., Klinger, T., Tesauro, G., Talamadupula, K., Zhou, B., Bengio, Y., Courville, A.: Multiresolution recurrent neural networks: An application to dialogue response generation. arXiv preprint arXiv:1606.00776 (2016)

  27. Snover, M., Dorr, B., Schwartz, R., Micciulla, L., Makhoul, J.: A study of translation edit rate with targeted human annotation. In: Proceedings of Association for Machine Translation in the Americas, vol. 200 (2006)

  28. Socher, R., Huang, E.H., Pennington, J., Ng, A.Y., Manning, C.D.: Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In: NIPS, vol. 24, pp. 801–809 (2011)

  29. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)

  30. Sundermeyer, M., Schlüter, R., Ney, H.: LSTM neural networks for language modeling. In: Interspeech, pp. 194–197 (2012)

  31. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. In: Advances in Neural Information Processing Systems, pp. 3104–3112 (2014)

  32. Vinyals, O., Kaiser, Ł., Koo, T., Petrov, S., Sutskever, I., Hinton, G.: Grammar as a foreign language. In: Advances in Neural Information Processing Systems, pp. 2773–2781 (2015)

  33. Wieting, J., Bansal, M., Gimpel, K., Livescu, K., Roth, D.: From paraphrase database to compositional paraphrase model and back. arXiv preprint arXiv:1506.03487 (2015)

  34. Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al.: Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 (2016)

  35. Wubben, S., Van Den Bosch, A., Krahmer, E.: Paraphrase generation as monolingual translation: data and evaluation. In: Proceedings of the 6th International Natural Language Generation Conference, pp. 203–207. Association for Computational Linguistics (2010)

  36. Zhao, S., Lan, X., Liu, T., Li, S.: Application-driven statistical paraphrase generation. In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, vol. 2, pp. 834–842. Association for Computational Linguistics (2009)

  37. Zhao, S., Niu, C., Zhou, M., Liu, T., Li, S.: Combining multiple resources to improve SMT-based paraphrasing model. In: ACL, pp. 1021–1029 (2008)

  38. Zhao, S., Wang, H., Lan, X., Liu, T.: Leveraging multiple MT engines for paraphrase generation. In: Proceedings of the 23rd International Conference on Computational Linguistics, pp. 1326–1334. Association for Computational Linguistics (2010)

Acknowledgement

This work was supported by the Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 102.01-2014.22.

Author information

Corresponding author

Correspondence to Anh-Cuong Le.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Nguyen-Ngoc, K., Le, AC., Nguyen, VH. (2018). An Attention-Based Long-Short-Term-Memory Model for Paraphrase Generation. In: Huynh, VN., Inuiguchi, M., Tran, D., Denoeux, T. (eds) Integrated Uncertainty in Knowledge Modelling and Decision Making. IUKM 2018. Lecture Notes in Computer Science(), vol 10758. Springer, Cham. https://doi.org/10.1007/978-3-319-75429-1_14

  • DOI: https://doi.org/10.1007/978-3-319-75429-1_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-75428-4

  • Online ISBN: 978-3-319-75429-1

  • eBook Packages: Computer Science, Computer Science (R0)
