Cross-Domain Interpolation for Unpaired Image-to-Image Translation

  • Jorge López
  • Antoni Mauricio
  • Jose Díaz
  • Guillermo Cámara
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11754)

Abstract

Unpaired image-to-image translation is a challenging problem that consists of extracting and matching latent vectors from a source domain A and a target domain B. The two latent spaces are matched and interpolated by directed correspondence functions, F for \(A \rightarrow B\) and G for \(B \rightarrow A\). Current efforts focus on models based on Generative Adversarial Networks (GANs), since they synthesize highly realistic samples across different domains by learning critical features of their latent spaces. Nonetheless, domain exploration provides no explicit supervision, so most GAN-based models fail to learn these key features. As a consequence, the correspondence function overfits and either fails in the reverse direction or loses translation quality. In this paper, we propose a guided learning model built on manifold bi-directional translation loops between the source and target domains, using the Wasserstein distance between their probability distributions. The bi-directional translation is CycleGAN-based, but treats the latent space Z as an intermediate domain that guides the learning process and reduces the error induced by the loops. We report experimental results on several public datasets hosted at the EECS-Berkeley webpage (http://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/), including Cityscapes, Horse2zebra, and Monet2photo. Our results are competitive with the state of the art in visual quality, stability, and other baseline metrics.
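
To make the loop described above concrete, below is a minimal PyTorch sketch of the losses such a model could combine: a cycle-consistency term that routes A through the intermediate latent domain Z to B and back, a latent-guidance term anchored on Z, and a Wasserstein-style critic term on the target domain. Here F corresponds to dec_B ∘ enc_A and G to dec_A ∘ enc_B. All module names (enc_A, dec_B, critic_B), the toy architectures, and the loss weights are illustrative assumptions, not the authors' actual implementation.

    # Hypothetical sketch of bi-directional translation losses through a shared
    # latent space Z. Shapes, names, and weights are illustrative assumptions.
    import torch
    import torch.nn as nn

    latent_dim = 128

    # Toy encoder/decoder pairs standing in for F (A -> B) and G (B -> A),
    # both routed through the intermediate latent domain Z.
    enc_A = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, latent_dim))
    dec_B = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Unflatten(1, (3, 64, 64)))
    enc_B = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, latent_dim))
    dec_A = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Unflatten(1, (3, 64, 64)))

    # WGAN-style critic scoring realism in domain B.
    critic_B = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))

    l1 = nn.L1Loss()
    x_a = torch.randn(8, 3, 64, 64)  # batch from source domain A

    # Forward loop A -> Z -> B -> Z -> A (cycle consistency through Z).
    z_a = enc_A(x_a)
    fake_b = dec_B(z_a)
    z_back = enc_B(fake_b)
    rec_a = dec_A(z_back)

    cycle_loss = l1(rec_a, x_a)    # image-level cycle-consistency loss
    latent_loss = l1(z_back, z_a)  # latent-level guidance through Z

    # Generator-side Wasserstein term: push fake_b toward the critic's
    # estimate of domain B.
    adv_loss = -critic_B(fake_b).mean()

    total = adv_loss + 10.0 * cycle_loss + 1.0 * latent_loss
    print(float(total))

The symmetric B -> Z -> A -> Z -> B loop (with a critic on domain A) would be added analogously, and the two loop losses trained jointly against their critics.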

Keywords

Image-to-image translation · Generative Adversarial Network · Cross-domain interpolation

References

  1. Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein GAN. arXiv preprint arXiv:1701.07875 (2017)
  2. Benaim, S., Wolf, L.: One-shot unsupervised cross domain translation. In: Advances in Neural Information Processing Systems, pp. 2104–2114 (2018)
  3. Gatys, L., Ecker, A.S., Bethge, M.: Texture synthesis using convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 262–270 (2015)
  4. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016)
  5. Goodfellow, I., et al.: Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 2672–2680 (2014)
  6. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of Wasserstein GANs. In: Advances in Neural Information Processing Systems, pp. 5767–5777 (2017)
  7. Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., Salesin, D.H.: Image analogies. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 327–340. ACM (2001)
  8. Hiasa, Y., et al.: Cross-modality image synthesis from unpaired data using CycleGAN. In: Gooya, A., Goksel, O., Oguz, I., Burgos, N. (eds.) SASHIMI 2018. LNCS, vol. 11037, pp. 31–41. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00536-8_4
  9. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017)
  10. Ledig, C., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690 (2017)
  11. Li, M., Huang, H., Ma, L., Liu, W., Zhang, T., Jiang, Y.: Unsupervised image-to-image translation with stacked cycle-consistent adversarial networks. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 184–199 (2018)
  12. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
  13. Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Paul Smolley, S.: Least squares generative adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802 (2017)
  14. Mauricio, A., López, J., Huauya, R., Diaz, J.: High-resolution generative adversarial neural networks applied to histological images generation. In: Kůrková, V., Manolopoulos, Y., Hammer, B., Iliadis, L., Maglogiannis, I. (eds.) ICANN 2018. LNCS, vol. 11140, pp. 195–202. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01421-6_20
  15. Miyato, T., Kataoka, T., Koyama, M., Yoshida, Y.: Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957 (2018)
  16. Yi, Z., Zhang, H., Tan, P., Gong, M.: DualGAN: unsupervised dual learning for image-to-image translation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2849–2857 (2017)
  17. Zhu, J.-Y., Krähenbühl, P., Shechtman, E., Efros, A.A.: Generative visual manipulation on the natural image manifold. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9909, pp. 597–613. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46454-1_36
  18. Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232 (2017)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science, Universidad Católica San Pablo, Arequipa, Peru
  2. Universidad Nacional de Ingeniería, Lima, Peru
  3. Department of Computer Science, Federal University of Ouro Preto, Ouro Preto, Brazil