
Deep Learning Trends and Inspired Systems in Natural Language Processing

Data Science in Societal Applications

Part of the book series: Studies in Big Data (SBD, volume 114)

Abstract

Text data is rapidly becoming commonplace. Social media forms a vast pool of such data that is easily accessible to the general public and researchers alike, while corporations and other businesses draw on surveys and company mail for theirs. It is therefore no surprise that the field of Natural Language Processing has seen a consistent rise in research and insight, particularly in Natural Language Understanding (NLU), which considers syntax, structure and sentences together. Recent advances in representation learning have also greatly improved text understanding by exploiting a simple and long-overlooked idea: attention. This chapter explores recent trends in deep learning and the systems they have inspired in this area, charting the insights that have led to current state-of-the-art models and architectures. It also summarizes the similarities and contrasts among these models to trace the evolution of deep learning in NLP.
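To make the attention idea concrete, the short Python sketch below (illustrative only, not code from the chapter) computes scaled dot-product self-attention, the core operation behind the Transformer-style models the chapter surveys; the names Q, K, V and the toy shapes are conventional choices assumed for this example.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Similarity of every query to every key, scaled by sqrt(d_k) to keep scores stable.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the keys turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mixture of the value vectors.
    return weights @ V

# Toy self-attention over 3 tokens with 4-dimensional representations.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)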

Author information

Corresponding author

Correspondence to Manjusha Pandey.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Ansari, A.A., Rautaray, S.S., Pandey, M. (2022). Deep Learning Trends and Inspired Systems in Natural Language Processing. In: Rautaray, S.S., Pandey, M., Nguyen, N.G. (eds) Data Science in Societal Applications. Studies in Big Data, vol 114. Springer, Singapore. https://doi.org/10.1007/978-981-19-5154-1_5

  • DOI: https://doi.org/10.1007/978-981-19-5154-1_5

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-5153-4

  • Online ISBN: 978-981-19-5154-1

  • eBook Packages: Computer Science, Computer Science (R0)
