Semi-supervised Adversarial Image-to-Image Translation

  • Jose Eusebio
  • Hemanth Venkateswara (corresponding author)
  • Sethuraman Panchanathan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11010)


Image-to-image translation maps images from one domain to another while keeping selected aspects of the image consistent across domains. Translation models that preserve the category of the image are useful for applications such as domain adaptation. Generative models such as variational autoencoders can extract latent factors of generation from an image. Building on variational autoencoders and generative adversarial networks, we develop a semi-supervised image-to-image translation procedure and apply it to image translation and domain adaptation on complex digit datasets.
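The abstract builds on variational autoencoders, whose encoder produces latent factors that translation models can manipulate. As a point of reference only (this is not the authors' implementation), the following NumPy sketch shows the two VAE building blocks such models rely on: the reparameterization trick for sampling latent codes, and the closed-form KL term that regularizes the latent space toward a standard normal prior. Shapes and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
    # so gradients can flow through the sampling step.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL( q(z|x) || N(0, I) ) for a diagonal Gaussian posterior,
    # averaged over the batch. Always non-negative.
    return -0.5 * np.mean(np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1))

# Toy "encoder" outputs for a batch of 4 images with an 8-dimensional latent.
mu = rng.standard_normal((4, 8)) * 0.1
log_var = rng.standard_normal((4, 8)) * 0.1

z = reparameterize(mu, log_var)   # latent codes to feed a decoder/translator
kl = kl_divergence(mu, log_var)   # regularization term in the VAE objective
```

In a VAE-GAN translation setup of the kind the abstract describes, this KL term would be combined with a reconstruction loss and an adversarial loss from a discriminator; the sketch covers only the latent-sampling machinery.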


Keywords: Image-to-image translation · Domain adaptation · Generative adversarial networks · Variational autoencoders



The authors thank Arizona State University and the National Science Foundation for their funding support. This material is partially based upon work supported by the National Science Foundation under Grant No. 1116360.



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Jose Eusebio¹
  • Hemanth Venkateswara¹ (corresponding author)
  • Sethuraman Panchanathan¹

  1. Center for Cognitive Ubiquitous Computing (CUbiC), Arizona State University, Tempe, USA
