Evolution of Images with Diversity and Constraints Using a Generative Adversarial Network

  • Aneta Neumann
  • Christo Pyromallis
  • Bradley Alexander
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11306)


Generative Adversarial Networks (GANs) are a machine learning approach capable of generating novel images. Recent developments in deep learning have enabled the generation of compelling images using generative networks that encode images in lower-dimensional latent spaces. Nature-inspired optimisation methods have also been used to generate new images. In this paper, we train a GAN with the aim of generating images by optimising feature scores in one or two dimensions. We use search in the latent space to generate images that score high or low values on feature measures, and we compare different feature measures. Our approach successfully generates image variations on two datasets, faces and butterflies. The work gives insights into how feature measures promote diversity of images and how the different measures interact.
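The latent-space search described above can be sketched in a few lines. The fragment below is a minimal illustration only: it uses a simple (1+1) evolution strategy as a stand-in for the paper's optimiser, a fixed random linear map as a stand-in for the trained GAN generator, and mean brightness as a stand-in feature measure. All names here (`generate`, `feature_score`, `evolve_latent`) are hypothetical, not the paper's actual code.

```python
import numpy as np

# Hypothetical stand-ins: a fixed random linear "generator" mapping a
# latent vector to a small greyscale image, and a feature measure
# (mean brightness) to optimise. The paper's generator is a trained
# GAN; this sketch only illustrates the latent-space search itself.
rng = np.random.default_rng(0)
LATENT_DIM = 16
W = rng.normal(size=(8 * 8, LATENT_DIM))

def generate(z):
    """Map a latent vector to an 8x8 'image' with pixels in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-W @ z))  # element-wise sigmoid

def feature_score(image):
    """Toy feature measure: mean brightness of the image."""
    return float(image.mean())

def evolve_latent(steps=200, sigma=0.3):
    """(1+1)-ES over the latent space, maximising the feature score."""
    z = rng.normal(size=LATENT_DIM)
    best = feature_score(generate(z))
    for _ in range(steps):
        z_new = z + sigma * rng.normal(size=LATENT_DIM)  # mutate
        score = feature_score(generate(z_new))
        if score >= best:  # keep the candidate if it is no worse
            z, best = z_new, score
    return z, best

z_best, score_best = evolve_latent()
print(f"best brightness score: {score_best:.3f}")
```

Minimising a measure, or combining two measures for the two-dimensional case, only changes the acceptance condition and the scoring function; the search loop over the latent space stays the same.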



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Aneta Neumann
  • Christo Pyromallis
  • Bradley Alexander
  1. Optimisation and Logistics, School of Computer Science, The University of Adelaide, Adelaide, Australia
