Machine Translation from Natural Language to Code Using Long-Short Term Memory

  • K. M. Tahsin Hassan Rahit (email author)
  • Rashidul Hasan Nabil
  • Md Hasibul Huq
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1069)


Making computer programming languages more understandable and easier for humans is a longstanding problem. From assembly language to present-day object-oriented programming, new concepts have emerged to make programming easier, so that a programmer can focus on the logic and the architecture rather than on the code and language itself. To go a step further in removing the human-computer language barrier, this paper proposes a machine learning approach that uses a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) to translate human language into programming-language code. The programmer writes expressions for code in layman's language, and the machine learning model translates them into the target programming language. The proposed approach yields 74.40% accuracy. This result can be improved further by incorporating additional techniques, which are also discussed in this paper.
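The abstract describes a translation model built from an encoder-decoder RNN with LSTM units, but gives no architectural details. The sketch below is a minimal illustrative encoder-decoder LSTM in PyTorch for mapping natural-language token sequences to code token sequences; all class names, vocabulary sizes, and dimensions are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder LSTM for text-to-code translation (illustrative)."""
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)   # natural-language tokens
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)   # code tokens
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.decoder = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the natural-language token ids into a final (hidden, cell) state.
        _, state = self.encoder(self.src_emb(src))
        # Decode conditioned on the encoder state (teacher forcing with tgt).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)  # logits over the target code vocabulary

# Toy usage with random token ids.
model = Seq2Seq(src_vocab=100, tgt_vocab=120)
src = torch.randint(0, 100, (2, 7))   # batch of 2 sentences, 7 tokens each
tgt = torch.randint(0, 120, (2, 9))   # corresponding code sequences, 9 tokens
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 9, 120])
```

In practice, training would minimize cross-entropy over the code-token logits, and inference would decode greedily or with beam search from a start token rather than using teacher forcing.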


Text to code · Machine learning · Machine translation · NLP · RNN · LSTM



We would like to thank Dr. Khandaker Tabin Hasan, Head of the Department of Computer Science, American International University-Bangladesh, for his inspiration and encouragement in all of our research work. We also thank the Future Technology Conference 2019 committee for partially supporting our attendance at the conference, and one of our colleagues, Faheem Abrar, Software Developer, for his thorough review of and comments on this research work and for supporting us with funding.



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • K. M. Tahsin Hassan Rahit (1, 2) (email author)
  • Rashidul Hasan Nabil (3, 4)
  • Md Hasibul Huq (5)
  1. Institute of Computer Science, Bangladesh Atomic Energy Commission, Dhaka, Bangladesh
  2. Department of Biochemistry and Molecular Biology, University of Calgary, Calgary, Canada
  3. Department of Computer Science, American International University-Bangladesh, Dhaka, Bangladesh
  4. Department of Computer Science and Engineering, City University, Dhaka, Bangladesh
  5. Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada