CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer

  • Conference paper
  • Computer Vision – ECCV 2020 Workshops (ECCV 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12537)

Abstract

Existing makeup style transfer models perform image synthesis whose results cannot be explicitly controlled, yet the ability to modify makeup color continuously is a desirable property for virtual try-on applications. We propose a new formulation of the makeup style transfer task, with the objective of learning a color-controllable makeup style synthesis. We introduce CA-GAN, a generative model that learns to modify the color of specific objects (e.g. lips or eyes) in an image to an arbitrary target color while preserving the background. Since color labels are rare and costly to acquire, our method leverages weakly supervised learning for conditional GANs. This enables learning a controllable synthesis of complex objects and requires only a weak proxy of the image attribute we wish to modify. Finally, we present the first quantitative analysis of makeup style transfer and color control performance.
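The weak-supervision idea in the abstract, training against an inexpensive proxy of the attribute instead of ground-truth color labels, can be illustrated with a toy color proxy. The function name and the choice of a per-channel median over a segmentation mask are illustrative assumptions for this sketch, not the paper's exact procedure:

```python
import numpy as np

def weak_color_label(image, mask):
    """Compute a weak color proxy for a region: the per-channel median of
    the pixels selected by a segmentation mask. A cheap proxy like this can
    stand in for costly ground-truth color labels during training.

    image: (H, W, 3) float array with values in [0, 1]
    mask:  (H, W) boolean array marking the object (e.g. lips)
    """
    pixels = image[mask]  # (N, 3) pixels inside the masked region
    if pixels.size == 0:
        raise ValueError("empty mask: no pixels to estimate a color from")
    # Median per channel is robust to specular highlights and shadows.
    return np.median(pixels, axis=0)

# Toy example: a 4x4 image whose masked region is uniformly red.
img = np.zeros((4, 4, 3))
msk = np.zeros((4, 4), dtype=bool)
img[:2, :2] = [0.8, 0.1, 0.1]
msk[:2, :2] = True
label = weak_color_label(img, msk)  # -> array([0.8, 0.1, 0.1])
```

Such a proxy would serve as the conditioning signal for a color-aware generator; the median (rather than the mean) is a deliberate choice here because makeup regions often contain outlier pixels from gloss and shading.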


Notes

  1. Available upon demand at contact.ia@rd.loreal.com.

  2. Also accessible at https://robinkips.github.io/CA-GAN/.


Author information

Correspondence to Robin Kips.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 647 KB)

Supplementary material 2 (mp4 3585 KB)

Supplementary material 3 (mp4 2935 KB)

Supplementary material 4 (mp4 4988 KB)

Supplementary material 5 (mp4 1870 KB)

Supplementary material 6 (pdf 25990 KB)

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Kips, R., Gori, P., Perrot, M., Bloch, I. (2020). CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science, vol 12537. Springer, Cham. https://doi.org/10.1007/978-3-030-67070-2_17

  • DOI: https://doi.org/10.1007/978-3-030-67070-2_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67069-6

  • Online ISBN: 978-3-030-67070-2

  • eBook Packages: Computer Science, Computer Science (R0)
