Deep Interactive Evolution

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10783)


This paper describes an approach that combines generative adversarial networks (GANs) with interactive evolutionary computation (IEC). While GANs can be trained to produce lifelike images, they are normally sampled randomly from the learned distribution, providing limited control over the resulting output. Interactive evolution, on the other hand, has shown promise in creating various artifacts such as images, music, and 3D objects, but traditionally relies on a hand-designed evolvable representation of the target domain. The main insight in this paper is that a GAN trained on a specific target domain can act as a compact and robust genotype-to-phenotype mapping (i.e., most produced phenotypes resemble valid domain artifacts). Once such a GAN is trained, the latent vector given as input to the GAN's generator network can be put under evolutionary control, allowing controllable and high-quality image generation. In this paper, we demonstrate the advantage of this novel approach through a user study in which participants were able to evolve images that strongly resemble specific target images.
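The core idea above, evolving the generator's latent vector rather than the image itself, can be sketched as a minimal IEC loop. The sketch below is not the authors' implementation: the `generate` function is a stand-in (a fixed random projection) for a trained GAN generator, the latent size and mutation strength are assumed values, and the "user selection" step is hard-coded where an interface would normally present images for the user to pick.

```python
import numpy as np

LATENT_DIM = 64  # assumed latent-vector size; actual GANs often use 100


def generate(z):
    """Stand-in for a trained GAN generator G(z) -> image.

    In the paper's setup this would be a deep generator network; here a
    fixed random projection produces a deterministic 'image' so the
    evolutionary loop is runnable on its own.
    """
    rng = np.random.default_rng(0)  # fixed "weights"
    w = rng.standard_normal((LATENT_DIM, 32 * 32))
    return np.tanh(z @ w).reshape(32, 32)


def mutate(z, sigma=0.3, rng=None):
    """Gaussian mutation of a latent vector (the evolvable genotype)."""
    rng = rng or np.random.default_rng()
    return z + sigma * rng.standard_normal(z.shape)


def next_generation(selected, pop_size=8, rng=None):
    """Breed a new population from the user's selected latent vectors."""
    rng = rng or np.random.default_rng()
    children = [selected[i % len(selected)].copy() for i in range(pop_size)]
    return [mutate(c, rng=rng) for c in children]


# One IEC step: render the population, let the "user" pick favourites,
# then evolve the chosen latent vectors into the next generation.
rng = np.random.default_rng(42)
population = [rng.standard_normal(LATENT_DIM) for _ in range(8)]
images = [generate(z) for z in population]
picked = [population[0], population[3]]  # stand-in for an interactive choice
population = next_generation(picked, rng=rng)
```

Because mutation happens in latent space and every latent vector maps through the trained generator, each offspring is still a plausible sample from the learned distribution, which is what makes the GAN a robust genotype-to-phenotype mapping.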


Keywords: Interactive Development · Generative Adversarial Networks (GAN) · Interactive Evolutionary Computation (IEC) · User Fatigue · Picbreeder



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. New York University, New York, USA
  2. IT University of Copenhagen, Copenhagen, Denmark
  3. Beijing University of Posts and Telecommunications, Beijing, China
