Transferring GANs: Generating Images from Limited Data

  • Yaxing Wang
  • Chenshen Wu
  • Luis Herranz
  • Joost van de Weijer
  • Abel Gonzalez-Garcia
  • Bogdan Raducanu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11210)


Abstract

Transferring the knowledge of pre-trained networks to new domains by means of fine-tuning is a widely used practice in applications based on discriminative models. To the best of our knowledge, this practice has not been studied within the context of deep generative networks. Therefore, we study domain adaptation applied to image generation with generative adversarial networks. We evaluate several aspects of domain adaptation, including the impact of target domain size, the relative distance between source and target domains, and the initialization of conditional GANs. Our results show that using knowledge from pre-trained networks can shorten convergence time and can significantly improve the quality of the generated images, especially when the target data is limited. We show that these conclusions also hold for conditional GANs, even when the pre-trained model was trained without conditioning. Our results further suggest that density is more important than diversity, and that a dataset with one or a few densely sampled classes is a better source than more diverse datasets such as ImageNet or Places.
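The core practice the abstract describes, initializing a target GAN from the weights of a source GAN before adversarial fine-tuning on the (possibly small) target dataset, can be sketched as follows. This is a minimal conceptual illustration, not the authors' implementation: the layer names are hypothetical and plain NumPy arrays stand in for real network parameters.

```python
import numpy as np

def init_from_pretrained(pretrained, target, transfer_keys):
    """Initialize a target network's parameters from a pre-trained source
    network, as in fine-tuning: the layers listed in transfer_keys are
    copied from the source; all other layers keep their fresh (random)
    initialization.  Both networks are represented as dicts mapping layer
    names to weight arrays."""
    initialized = dict(target)
    for key in transfer_keys:
        if pretrained[key].shape != target[key].shape:
            raise ValueError(f"shape mismatch for layer {key!r}")
        initialized[key] = pretrained[key].copy()
    return initialized

# Hypothetical parameter dicts standing in for a DCGAN-style generator
# (layer names are illustrative only).
rng = np.random.default_rng(0)
pretrained_G = {"deconv1": rng.normal(size=(100, 512)),
                "deconv2": rng.normal(size=(512, 256)),
                "to_rgb":  rng.normal(size=(256, 3))}
fresh_G = {k: rng.normal(size=v.shape) for k, v in pretrained_G.items()}

# Transfer the shared layers, then adversarial fine-tuning on the target
# data would proceed from this initialization instead of from scratch.
G0 = init_from_pretrained(pretrained_G, fresh_G,
                          transfer_keys=["deconv1", "deconv2"])
```

In practice the same copy-then-fine-tune step would also be applied to the discriminator, and training then continues with the usual adversarial objective on the target data; the point of the paper is that this initialization shortens convergence and improves image quality when target data is scarce.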


Keywords: Generative adversarial networks · Transfer learning · Domain adaptation · Image generation



Y. Wang and C. Wu acknowledge the Chinese Scholarship Council (CSC) grants No. 201507040048 and No. 201709110103. L. Herranz acknowledges the European Union research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 6655919. This work was supported by TIN2016-79717-R and the CHISTERA project M2CR (PCIN-2015-251) of the Spanish Ministry, and by the CERCA Programme of the Generalitat de Catalunya. We also acknowledge the generous GPU support from NVIDIA.

Supplementary material

Supplementary material 1 (PDF, 10.4 MB): 474211_1_En_14_MOESM1_ESM.pdf



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

All authors are with the Computer Vision Center, Universitat Autònoma de Barcelona, Barcelona, Spain.
