
The Recurrent Neural Network for Program Synthesis

  • Conference paper
Digital Technologies and Applications (ICDTA 2021)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 211)


Abstract

Program synthesis is a young and ambitious field that consists of generating source code from short natural-language descriptions. It is rapidly becoming a popular research problem, and artificial intelligence offers interesting potential for supporting new tools in almost all areas of program analysis and software engineering. The goal of our long-term research project is to enable any user to create an application from a description written in natural language that specifies the need for a complete system. This involves a study of the user's needs, together with the design and implementation of an intelligent system that automatically realizes a software project (architecture, initialization scripts, configuration, etc.) expressed in natural language. We propose a recurrent neural network with LSTM cells and publish a dataset of more than 145,765 questions paired with their best answer, focused on Java. The proposed model performs well, achieving a success rate of 94%.
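The abstract's model is built on LSTM cells. As a minimal sketch of what such a recurrent unit computes, the forward pass of one LSTM step can be written in plain NumPy; the dimensions, weight initialization, and gate layout below are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (Hochreiter & Schmidhuber, 1997).

    x: input vector (d,); h_prev, c_prev: previous hidden/cell state (n,);
    W: (4n, d) input weights; U: (4n, n) recurrent weights; b: (4n,) bias.
    Gate order in the stacked matrices: input, forget, candidate, output.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b      # all four gate pre-activations at once
    i = sigmoid(z[0 * n:1 * n])     # input gate
    f = sigmoid(z[1 * n:2 * n])     # forget gate
    g = np.tanh(z[2 * n:3 * n])     # candidate cell state
    o = sigmoid(z[3 * n:4 * n])     # output gate
    c = f * c_prev + i * g          # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c

# Run a short toy sequence through the cell with random weights.
rng = np.random.default_rng(0)
d, n = 8, 16                        # illustrative input and hidden sizes
W = rng.normal(0, 0.1, (4 * n, d))
U = rng.normal(0, 0.1, (4 * n, n))
b = np.zeros(4 * n)

h = np.zeros(n)
c = np.zeros(n)
for t in range(5):                  # e.g. 5 token embeddings of a description
    x = rng.normal(0, 1.0, d)
    h, c = lstm_step(x, h, c, W, U, b)

print(h.shape)
```

In a sequence-to-sequence setting such as the one described here, an encoder built from these cells would consume the embedded natural-language description token by token, and a decoder would emit code tokens conditioned on the final states.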



Author information


Corresponding author

Correspondence to Achraf Berrajaa .


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Berrajaa, A., Ettifouri, E.H. (2021). The Recurrent Neural Network for Program Synthesis. In: Motahhir, S., Bossoufi, B. (eds) Digital Technologies and Applications. ICDTA 2021. Lecture Notes in Networks and Systems, vol 211. Springer, Cham. https://doi.org/10.1007/978-3-030-73882-2_8
