Abstract
We develop an automated video colorization framework that minimizes color flickering across frames. Applying image colorization techniques to successive frames of a video treats each frame as an independent colorization task, so the colors of a scene are not necessarily kept consistent across subsequent frames. The proposed solution is a novel deep recurrent encoder-decoder architecture that maintains temporal and contextual coherence between consecutive frames of a video. We use a high-level semantic feature extractor to automatically identify the context of a scene, including the objects it contains, together with a custom fusion layer that combines the spatial and temporal features of a frame sequence. We present experimental results showing, qualitatively, that recurrent neural networks can be successfully used to improve color consistency in video colorization.
Supported by Amazon Cloud Credits for Research.
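To make the architecture description above concrete, the following is a minimal sketch of a recurrent encoder-decoder colorizer in PyTorch. The module layout, channel sizes, and the broadcast-and-concatenate fusion scheme are illustrative assumptions for this sketch, not the authors' exact FlowChroma implementation: a per-frame CNN encoder extracts spatial features, an LSTM models the temporal context of the clip (standing in for the high-level semantic feature extractor plus recurrence), a fusion layer combines both, and a decoder predicts the chrominance channels.

```python
# Minimal sketch of a recurrent encoder-decoder video colorizer (assumed layout,
# not the authors' exact FlowChroma network).
import torch
import torch.nn as nn

class RecurrentColorizer(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=256):
        super().__init__()
        # Per-frame spatial encoder: grayscale (L channel) -> feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Temporal model over pooled per-frame features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Fusion: broadcast the temporal state over the spatial grid and concatenate
        self.fuse = nn.Conv2d(feat_dim + hidden_dim, feat_dim, 1)
        # Decoder: predict the two chrominance channels (a, b)
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(feat_dim, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),
        )

    def forward(self, frames):                                # frames: (B, T, 1, H, W)
        B, T, C, H, W = frames.shape
        feats = self.encoder(frames.view(B * T, C, H, W))     # (B*T, Fc, h, w)
        context = self.pool(feats).view(B, T, -1)             # (B, T, Fc)
        temporal, _ = self.lstm(context)                       # (B, T, hidden_dim)
        _, Fc, h, w = feats.shape
        temporal = temporal.reshape(B * T, -1, 1, 1).expand(-1, -1, h, w)
        fused = self.fuse(torch.cat([feats, temporal], dim=1))
        ab = self.decoder(fused)                               # (B*T, 2, H, W)
        return ab.view(B, T, 2, H, W)

# Usage: predict chrominance for a short clip of grayscale frames
model = RecurrentColorizer()
clip = torch.rand(1, 5, 1, 64, 64)      # 1 clip, 5 frames, 64x64 luminance
ab_channels = model(clip)                # (1, 5, 2, 64, 64) predicted chrominance
```

Because the LSTM state is shared across the clip, each frame's chrominance prediction is conditioned on the preceding frames, which is the mechanism that discourages color flicker in this formulation.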
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Wijesinghe, T., Abeysinghe, C., Wijayakoon, C., Jayathilake, L., Thayasivam, U. (2020). FlowChroma - A Deep Recurrent Neural Network for Video Colorization. In: Campilho, A., Karray, F., Wang, Z. (eds) Image Analysis and Recognition. ICIAR 2020. Lecture Notes in Computer Science, vol 12131. Springer, Cham. https://doi.org/10.1007/978-3-030-50347-5_2
DOI: https://doi.org/10.1007/978-3-030-50347-5_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-50346-8
Online ISBN: 978-3-030-50347-5
eBook Packages: Computer Science (R0)