
The Code Generation Method Based on Gated Attention and InterAction-LSTM

  • Conference paper

Web Information Systems and Applications (WISA 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12999)

Abstract

Code generation is an important research field in software engineering, aiming to reduce development costs and improve program quality. Increasingly, researchers seek to implement code generation through natural language understanding. In this paper, we propose a deep-learning-based method that converts natural language descriptions into program code. We use an encoder-decoder model with a gated attention mechanism, where the decoder is an InterAction-LSTM. The gated attention combines the previous decoding cell state with the source representations, mitigating the limitation of invariant source representations. The decoder lets the input and hidden information interact with each other before feeding them into the LSTM gates. We verify the method on two datasets, CoNaLa and Django. Our model outperforms the baseline models in both accuracy and BLEU.
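To make the two mechanisms named above concrete, the following is a minimal PyTorch sketch: an attention module whose source context is gated by the previous decoder cell state, and an LSTM cell whose input and hidden information interact before entering the gates. All class names, projections, and the exact form of the gating and interaction are illustrative assumptions, not the paper's precise formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAttention(nn.Module):
    """Attention whose source context is modulated by the previous decoder
    cell state, so the effective source representation varies per step
    (a sketch of the abstract's 'gated attention'; details assumed)."""
    def __init__(self, hidden_size):
        super().__init__()
        self.attn_score = nn.Linear(hidden_size * 2, 1)
        self.gate = nn.Linear(hidden_size * 2, hidden_size)

    def forward(self, src, prev_cell):
        # src: (batch, src_len, hidden); prev_cell: (batch, hidden)
        query = prev_cell.unsqueeze(1).expand(-1, src.size(1), -1)
        scores = self.attn_score(torch.cat([src, query], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)                   # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), src).squeeze(1)
        # Gate the context with the previous cell state.
        g = torch.sigmoid(self.gate(torch.cat([context, prev_cell], dim=-1)))
        return g * context                                    # gated context

class InterActionLSTMCell(nn.Module):
    """LSTM cell where the input and hidden state interact (elementwise,
    after projection) before entering the gates -- one plausible reading of
    'InterAction-LSTM'; the exact interaction is an assumption."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.proj_x = nn.Linear(input_size, hidden_size)
        self.proj_h = nn.Linear(hidden_size, hidden_size)
        self.cell = nn.LSTMCell(hidden_size, hidden_size)

    def forward(self, x, state):
        h, c = state
        # Let input and hidden information interact before the LSTM gates.
        mixed = torch.tanh(self.proj_x(x)) * torch.tanh(self.proj_h(h))
        return self.cell(mixed, (h, c))

At each decoding step, the gated context from GatedAttention would replace the fixed context vector of a standard attentional decoder, and InterActionLSTMCell would stand in for the ordinary LSTM cell.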



Author information

Corresponding author

Correspondence to Junhua Wu.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Y., Wu, J. (2021). The Code Generation Method Based on Gated Attention and InterAction-LSTM. In: Xing, C., Fu, X., Zhang, Y., Zhang, G., Borjigin, C. (eds) Web Information Systems and Applications. WISA 2021. Lecture Notes in Computer Science, vol 12999. Springer, Cham. https://doi.org/10.1007/978-3-030-87571-8_47

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-87571-8_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87570-1

  • Online ISBN: 978-3-030-87571-8

  • eBook Packages: Computer Science, Computer Science (R0)
