
CodeeGAN: Code Generation via Adversarial Training

  • Youqiang Deng
  • Cai Fu
  • Yang Li
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1123)

Abstract

The automatic generation of code is an important research problem in the field of machine learning. Generative Adversarial Networks (GANs) have shown a powerful ability in image generation. However, generating code via GANs has so far remained largely unexplored, because the discrete output of a language model hinders the application of gradient-based GANs. In this paper, we propose a model called CodeeGAN to generate code via adversarial training. First, we adopt the Policy Gradient method from Reinforcement Learning (RL) to handle the discrete data: since the generator outputs discrete tokens, it cannot be updated directly by gradient descent. Second, we use Monte Carlo Tree Search (MCTS) to build a rollout network that evaluates the reward of generated tokens. Based on these two mechanisms, we construct the CodeeGAN model to generate code via adversarial training. We evaluate the model on datasets from four different platforms. CodeeGAN outperforms existing approaches and demonstrates that code generation via adversarial training is an effective method.
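
To make the training loop described above concrete, the following is a minimal sketch in PyTorch of REINFORCE-style policy-gradient training with Monte Carlo rollouts, in the spirit of the abstract: the generator samples discrete code tokens, each partial sequence is completed several times by rollout, and the discriminator's average score on the completions serves as the per-token reward. All names (Generator, rollout_reward, VOCAB_SIZE, etc.) and the simple discriminator are illustrative assumptions, not the authors' released implementation.

    # A minimal sketch under assumed names and sizes; not the CodeeGAN code.
    import torch
    import torch.nn as nn

    VOCAB_SIZE, EMB_DIM, HID_DIM, SEQ_LEN, N_ROLLOUTS = 64, 32, 64, 20, 8

    class Generator(nn.Module):
        # LSTM language model that emits one code/DSL token per step.
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
            self.lstm = nn.LSTM(EMB_DIM, HID_DIM, batch_first=True)
            self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

        def step(self, tok, state):
            h, state = self.lstm(self.emb(tok), state)
            return torch.log_softmax(self.out(h[:, -1]), dim=-1), state

        def sample(self, batch, length, prefix=None):
            tok = torch.zeros(batch, 1, dtype=torch.long)  # <start> token id 0
            state, seq, log_ps = None, [], []
            if prefix is not None:  # replay the prefix to condition the LSTM state
                for t in range(prefix.size(1)):
                    _, state = self.step(tok, state)
                    tok = prefix[:, t:t + 1]
                seq.append(prefix)
            for _ in range(length):
                log_p, state = self.step(tok, state)
                tok = torch.multinomial(log_p.exp(), 1)  # sample a discrete token
                seq.append(tok)
                log_ps.append(log_p.gather(1, tok))
            return torch.cat(seq, 1), torch.cat(log_ps, 1)

    def rollout_reward(gen, disc, prefix, length=SEQ_LEN, n=N_ROLLOUTS):
        # Monte Carlo rollout: complete the partial sequence n times and use the
        # discriminator's average score as the reward for the last generated token.
        with torch.no_grad():
            pad = length - prefix.size(1)
            scores = []
            for _ in range(n):
                full = prefix if pad == 0 else gen.sample(prefix.size(0), pad, prefix)[0]
                scores.append(disc(full))
            return torch.stack(scores).mean(0)

    gen = Generator()
    # Stand-in discriminator; the paper's keywords suggest a CNN text classifier.
    disc_net = nn.Sequential(nn.Embedding(VOCAB_SIZE, EMB_DIM), nn.Flatten(),
                             nn.Linear(EMB_DIM * SEQ_LEN, 1), nn.Sigmoid())
    disc = lambda s: disc_net(s).squeeze(-1)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

    # One REINFORCE-style policy-gradient step for the generator.
    seq, log_ps = gen.sample(4, SEQ_LEN)
    rewards = torch.stack([rollout_reward(gen, disc, seq[:, :t + 1])
                           for t in range(SEQ_LEN)], dim=1)  # per-token rewards
    loss = -(log_ps * rewards).mean()  # ascend the expected discriminator reward
    opt.zero_grad(); loss.backward(); opt.step()

In full adversarial training, this generator update would alternate with updates of the discriminator on real versus generated code sequences; that loop is omitted here for brevity.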

Keywords

GAN · Code generation · CNN · LSTM · Policy Gradient · Monte Carlo Tree Search


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Huazhong University of Science and Technology, Wuhan, China
  2. Wuhan Maritime Communication Research Institute, Wuhan, China
