Affective Image Colorization

  • Regular Paper
Journal of Computer Science and Technology

Abstract

Colorization of gray-scale images has attracted much attention for a long time. One important role of image color is to convey emotions (through color themes). A colorization with an undesired color theme is of limited use, even if it is semantically correct; however, this has rarely been considered. Automatic colorization that respects both semantics and emotions is undoubtedly a challenge. In this paper, we propose a complete system for affective image colorization. The user only needs to assist object segmentation and to provide text labels and an affective word. First, the text labels, together with other object characteristics, are jointly used to filter Internet images, giving each object a set of semantically correct reference images. Second, we select a set of color themes according to the affective word, based on art theories. With these themes, a genetic algorithm is used to select the best reference for each object, balancing various requirements. Finally, we propose a hybrid texture synthesis approach for colorization. To the best of our knowledge, this is the first system able to efficiently colorize a gray-scale image semantically in an emotionally controllable fashion. Our experiments show the effectiveness of the system, especially its benefit compared with the previous Markov random field (MRF) based method.
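The reference-selection step described above — picking, for each object, the reference image that best balances semantic correctness against closeness to the color theme chosen for the affective word — can be sketched as a toy genetic search. Everything below (the candidate data, the Lab-like theme triples, the 0.01 weight, and all function names) is an illustrative assumption, not the paper's actual formulation:

```python
import random

# Hypothetical data: each object has candidate reference images, each
# described by a Lab-like color-theme triple and a semantic-match score
# in [0, 1]. Values are made up for illustration.
CANDIDATES = {
    "sky":  [((60, -10, -30), 0.9), ((50, 0, 0), 0.7), ((70, 5, -40), 0.8)],
    "tree": [((40, -30, 20), 0.8), ((55, -10, 10), 0.6)],
}
TARGET_THEME = (55, -15, -10)  # theme selected for the affective word

def theme_distance(a, b):
    # Euclidean distance between two color-theme triples.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fitness(assignment):
    # Balance semantic match against closeness to the affective theme;
    # the 0.01 trade-off weight is an arbitrary illustrative choice.
    score = 0.0
    for obj, idx in assignment.items():
        theme, semantic = CANDIDATES[obj][idx]
        score += semantic - 0.01 * theme_distance(theme, TARGET_THEME)
    return score

def select_references(generations=50, pop_size=20, seed=0):
    # A minimal genetic algorithm over per-object candidate indices:
    # elitist selection, uniform crossover, and random point mutation.
    rng = random.Random(seed)
    objs = list(CANDIDATES)
    def random_assignment():
        return {o: rng.randrange(len(CANDIDATES[o])) for o in objs}
    pop = [random_assignment() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # keep the fitter half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            child = {o: (a if rng.random() < 0.5 else b)[o] for o in objs}
            if rng.random() < 0.2:             # occasional mutation
                o = rng.choice(objs)
                child[o] = rng.randrange(len(CANDIDATES[o]))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = select_references()
```

Because the survivors of each generation are kept unchanged, the best assignment found so far is never lost, so the search converges quickly on this tiny example space.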


References

  1. Levin A, Lischinski D, Weiss Y. Colorization using optimization. ACM Transactions on Graphics, 2004, 23(3): 689-694.

  2. Welsh T, Ashikhmin M, Mueller K. Transferring color to greyscale images. ACM Transactions on Graphics, 2002, 21(3): 277-280.

  3. Ironi R, Cohen-Or D, Lischinski D. Colorization by example. In Proc. the 16th Eurographics Workshop on Rendering Techniques, June 29-July 1, 2005, pp.201-210.

  4. Charpiat G, Hofmann M, Schölkopf B. Automatic image colorization via multimodal predictions. In Proc. the 10th ECCV, October 2008, pp.126-139.

  5. Tai Y W, Jia J Y, Tang C K. Local color transfer via probabilistic segmentation by expectation-maximization. In Proc. CVPR2005, June 2005, Vol.1, pp.747-754.

  6. Chia A Y S, Zhuo S J, Gupta R K, Tai Y W, Cho S Y, Tan P, Lin S. Semantic colorization with internet images. ACM Transactions on Graphics, 2011, 30(6), Article No.156.

  7. Arnheim R. Art and Visual Perception: A Psychology of the Creative Eye. University of California Press, 1954.

  8. Kobayashi S. Color Image Scale. Kodansha International, 1992.

  9. Kobayashi S. Art of Color Combinations. Kodansha International, 1995.

  10. Wang X H, Jia J, Liao H Y, Cai L H. Image colorization with an affective word. In Proc. Computational Visual Media Conference 2012, November 2012, pp.51-58.

  11. Xu K, Li Y, Ju T, Hu S M, Liu T Q. Efficient affinity-based edit propagation using K-D tree. ACM Transactions on Graphics, 2009, 28(5), Article No.118.

  12. Li Y, Ju T, Hu S M. Instant propagation of sparse edits on images and videos. Computer Graphics Forum, 2010, 29(7): 2049-2054.

  13. Huang Y C, Tung Y S, Chen J C, Wang S W, Wu J L. An adaptive edge detection based colorization algorithm and its applications. In Proc. the 13th MULTIMEDIA, November 2005, pp.351-354.

  14. Huang H, Zang Y, Li C F. Example-based painting guided by color features. The Visual Computer, 2010, 26(6/8): 933-942.

  15. Xiao X Z, Ma L Z. Gradient-preserving color transfer. Computer Graphics Forum, 2009, 28(6/8): 1879-1886.

  16. Reinhard E, Ashikhmin M, Gooch B, Shirley P. Color transfer between images. IEEE Computer Graphics and Applications, 2001, 21(5): 34-41.

  17. Chen T, Tan P, Ma L Q, Cheng M M, Shamir A, Hu S M. PoseShop: Human image database construction and personalized content synthesis. IEEE Transactions on Visualization and Computer Graphics, 2012, http://doi.ieeecomputersociety.org/10.1109/TVCG.2012.148, Sept. 2012.

  18. Huang H, Zhang L, Zhang H C. Arcimboldo-like collage using internet images. ACM Transactions on Graphics, 2011, 30(6), Article No.155.

  19. Liu H, Zhang L, Huang H. Web-image driven best views of 3D shapes. The Visual Computer, 2012, 28(3): 279-287.

  20. Chen T, Cheng M M, Tan P, Shamir A, Hu S M. Sketch2Photo: Internet image montage. ACM Transactions on Graphics, 2009, 28(5), Article No.124.

  21. Csurka G, Skaff S, Marchesotti L, Saunders C. Building look & feel concept models from color combinations: With applications in image classification, retrieval, and color transfer. The Visual Computer, 2011, 27(12): 1039-1053.

  22. Cohen-Or D, Sorkine O, Gal R, Leyvand T, Xu Y Q. Color harmonization. ACM Transactions on Graphics, 2006, 25(3): 624-630.

  23. O'Donovan P, Agarwala A, Hertzmann A. Color compatibility from large datasets. ACM Transactions on Graphics, 2011, 30(4), Article No.63.

  24. Boykov Y, Funka-Lea G. Graph cuts and efficient N-D image segmentation. International Journal of Computer Vision, 2006, 70(2): 109-131.

  25. Rother C, Kolmogorov V, Blake A. “GrabCut”: Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics, 2004, 23(3): 309-314.

  26. Cheng M M, Zhang F L, Mitra N J, Huang X L, Hu S M. RepFinder: Finding approximately repeated scene elements for image editing. ACM Transactions on Graphics, 2010, 29(4), Article No.83.

  27. Cheng M M, Zhang G X, Mitra N J, Huang X L, Hu S M. Global contrast based salient region detection. In Proc. CVPR2011, June 2011, pp.409-416.

  28. Belongie S, Malik J, Puzicha J. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(4): 509-522.

  29. Wang X H, Jia J, Cai L H. Affective image adjustment with a single word. To appear in The Visual Computer.

  30. Dong Z D, Dong Q. HowNet and the Computation of Meaning. World Scientific, 2006.

  31. von Luxburg U. A tutorial on spectral clustering. Statistics and Computing, 2007, 17(4): 395-416.

  32. Barnes C, Shechtman E, Finkelstein A, Goldman D B. PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics, 2009, 28(3), Article No.24.

  33. Chen J T, Wang B. Solid texture synthesis using position histogram matching. In Proc. the 11th CAD/Graphics, August 2009, pp.150-153.

  34. Sheikh H R, Bovik A C. Image information and visual quality. IEEE Transactions on Image Processing, 2006, 15(2): 430-444.

  35. Zhong F, Qin X Y, Peng Q S. Robust image segmentation against complex color distribution. The Visual Computer, 2011, 27(6-8): 707-716.

  36. Wu J L, Shen X Y, Liu L G. Interactive two-scale color-to-gray. The Visual Computer, 2012, 28(6-8): 723-731.

Author information

Corresponding author

Correspondence to Xiao-Hui Wang.

Additional information

This work is supported by the National Basic Research 973 Program of China under Grant No. 2011CB302201 and the National Natural Science Foundation of China under Grant Nos. 61003094 and 60931160443. It is also funded by the Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-Discipline Foundation of China, and supported by the Innovation Fund of the Tsinghua-Tencent Joint Laboratory of China.

*The preliminary version of the paper was published in the Proceedings of the 2012 Computational Visual Media Conference.

About this article

Cite this article

Wang, XH., Jia, J., Liao, HY. et al. Affective Image Colorization. J. Comput. Sci. Technol. 27, 1119–1128 (2012). https://doi.org/10.1007/s11390-012-1290-4
