
Semantic-Sparse Colorization Network for Deep Exemplar-Based Colorization

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13666)

Abstract

Exemplar-based colorization approaches rely on a reference image to provide plausible colors for a target gray-scale image. The key difficulty of exemplar-based colorization lies in establishing an accurate correspondence between these two images. Previous approaches have attempted to construct such a correspondence but face two obstacles. First, using the luminance channel alone to compute the correspondence is inaccurate. Second, the dense correspondence they build introduces incorrect matches and increases the computational burden. To address these two problems, we propose the Semantic-Sparse Colorization Network (SSCN), which transfers both the global image style and detailed semantic-related colors to the gray-scale image in a coarse-to-fine manner. Our network balances global and local colors while alleviating the ambiguous matching problem. Experiments show that our method outperforms existing methods in both quantitative and qualitative evaluation and achieves state-of-the-art performance.
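The idea of combining a sparse semantic correspondence with a global-style fallback, as described in the abstract, can be illustrated with a toy sketch. This is not the paper's actual architecture; it is a minimal NumPy illustration assuming per-pixel features have already been extracted (e.g., by a pretrained backbone), where only confident matches transfer local reference colors and the rest fall back to the reference's global mean color.

```python
import numpy as np

def sparse_color_transfer(tgt_feat, ref_feat, ref_ab, tau=0.5):
    """Toy sketch of sparse semantic color transfer (illustrative only).

    tgt_feat: (N, C) per-pixel features of the gray-scale target
    ref_feat: (M, C) per-pixel features of the reference image
    ref_ab:   (M, 2) ab color channels of the reference image
    tau:      confidence threshold below which a match is discarded
    """
    # Cosine similarity between every target and reference pixel.
    t = tgt_feat / np.linalg.norm(tgt_feat, axis=1, keepdims=True)
    r = ref_feat / np.linalg.norm(ref_feat, axis=1, keepdims=True)
    sim = t @ r.T                      # (N, M) similarity matrix

    best = sim.argmax(axis=1)          # best reference match per target pixel
    conf = sim.max(axis=1)             # confidence of that match

    # Sparse transfer: keep only confident matches; unmatched pixels
    # fall back to the reference's global mean color ("global style").
    out = np.where(conf[:, None] >= tau,
                   ref_ab[best],
                   ref_ab.mean(axis=0, keepdims=True))
    return out, conf
```

Thresholding the match confidence is one simple way to obtain a sparse correspondence: it suppresses the ambiguous matches that a dense correspondence would force, at the cost of deferring those pixels to the coarse global color.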



Acknowledgment

This work was supported by SZSTC Grant No. JCYJ20190809172201639 and WDZC20200820200655001, Shenzhen Key Laboratory ZDSYS20210623092001004.

Author information

Correspondence to Chun Yuan.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 4747 KB)

Rights and permissions


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Bai, Y., Dong, C., Chai, Z., Wang, A., Xu, Z., Yuan, C. (2022). Semantic-Sparse Colorization Network for Deep Exemplar-Based Colorization. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13666. Springer, Cham. https://doi.org/10.1007/978-3-031-20068-7_29


  • DOI: https://doi.org/10.1007/978-3-031-20068-7_29


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20067-0

  • Online ISBN: 978-3-031-20068-7

  • eBook Packages: Computer Science, Computer Science (R0)
