Learned Variational Video Color Propagation

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

In this paper, we propose a novel method for color propagation, used to recolor gray-scale videos (e.g., historic movies). Our energy-based model combines deep learning with a variational formulation. At its core, the method optimizes over a set of plausible color proposals extracted from motion and semantic feature matches, together with a learned regularizer that resolves color ambiguities by enforcing spatial color smoothness. Our approach allows interpreting intermediate results and incorporating extensions, such as the use of multiple reference frames, even after training. We achieve state-of-the-art results on a number of standard benchmark datasets across multiple metrics, and also provide convincing results on real historical videos, even though such videos are not present during training. Moreover, a user evaluation shows that our method propagates initial colors more faithfully and with better temporal consistency.
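To make the propagation idea concrete, the following is a minimal, illustrative sketch, not the authors' implementation: per-pixel color proposals for the current frame (e.g., chrominance values warped from the previous frame by optical flow, or copied from semantic feature matches) are fused by minimizing a data term plus a smoothness regularizer. The function name, the confidence weighting, and the simple total-variation regularizer are assumptions for illustration; the paper instead learns its regularizer and resolves color ambiguities during energy minimization.

```python
# Illustrative sketch only -- a hand-crafted stand-in for the paper's learned
# variational model. Fuses K chrominance proposals per pixel by minimizing a
# confidence-weighted data term plus a total-variation smoothness term.
import torch

def fuse_color_proposals(proposals, confidences, lam=0.1, steps=200, lr=0.5):
    # proposals:   (K, 2, H, W) candidate ab-chrominance maps
    #              (e.g. flow-warped previous frame, feature matches)
    # confidences: (K, 1, H, W) non-negative match confidences (assumed given)
    w = confidences / confidences.sum(dim=0, keepdim=True)
    # Initialize with the confidence-weighted mean of the proposals.
    u = (w * proposals).sum(dim=0).detach().requires_grad_(True)
    opt = torch.optim.SGD([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Data term: stay close to confident proposals.
        data = (confidences * (u - proposals) ** 2).sum()
        # Smoothness term: anisotropic total variation, a simple stand-in
        # for the learned regularizer described in the paper.
        tv = ((u[:, 1:, :] - u[:, :-1, :]).abs().sum()
              + (u[:, :, 1:] - u[:, :, :-1]).abs().sum())
        (data + lam * tv).backward()
        opt.step()
    return u.detach()  # fused chrominance for the current frame
```

Pairing the fused chrominance with the frame's original luminance yields the recolored frame; repeating this step frame by frame propagates the reference colors through the video.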

Markus Hofinger and Erich Kobler are shared co-first authors. Source code is available at https://github.com/VLOGroup/LVVCP.


Notes

  1. We thank the DVCP authors for providing their data. As their results exclude the DAVIS-2017-val video mallard-water, we also omit it for a fair comparison, resulting in 14 sequences.

References

  1. An, X., Pellacini, F.: AppProp: all-pairs appearance-space edit propagation. In: ACM SIGGRAPH, pp. 1–9 (2008)
  2. Anwar, S., Tahir, M., Li, C., Mian, A., Khan, F.S., Muzaffar, A.W.: Image colorization: a survey and dataset. arXiv:2008.10774 (2020)
  3. Barnes, C., Shechtman, E., Finkelstein, A., Goldman, D.B.: PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 28(3), 24 (2009)
  4. Barron, J.T., Poole, B.: The fast bilateral solver. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 617–632. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_38
  5. Bugeau, A., Ta, V.T., Papadakis, N.: Variational exemplar-based image colorization. IEEE Trans. Image Process. 23(1), 298–307 (2013)
  6. Cao, Y., Zhou, Z., Zhang, W., Yu, Y.: Unsupervised diverse colorization via generative adversarial networks. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 151–166 (2017)
  7. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011)
  8. Chambolle, A., Pock, T.: An introduction to continuous optimization for imaging. Acta Numer. 25, 161–319 (2016). https://doi.org/10.1017/S096249291600009X
  9. Charpiat, G., Hofmann, M., Schölkopf, B.: Automatic image colorization via multimodal predictions. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008. LNCS, vol. 5304, pp. 126–139. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-88690-7_10
  10. Chen, X., Zou, D., Zhao, Q., Tan, P.: Manifold preserving edit propagation. ACM Trans. Graph. 31(6), 1–7 (2012)
  11. Cheng, Z., Yang, Q., Sheng, B.: Deep colorization. In: International Conference on Computer Vision, pp. 415–423 (2015)
  12. Deshpande, A., Lu, J., Yeh, M.C., Chong, M.J., Forsyth, D.: Learning diverse image colorization. In: Conference on Computer Vision and Pattern Recognition (2017)
  13. Deshpande, A., Rock, J., Forsyth, D.: Learning large-scale automatic image colorization. In: International Conference on Computer Vision, pp. 567–575 (2015)
  14. Endo, Y., Iizuka, S., Kanamori, Y., Mitani, J.: DeepProp: extracting deep features from a single image for edit propagation. In: Computer Graphics Forum, pp. 189–201 (2016)
  15. Faridul, H.S., Pouli, T., Chamaret, C., Stauder, J., Trémeau, A., Reinhard, E., et al.: A survey of color mapping and its applications. In: Eurographics (State of the Art Reports), vol. 3 (2014)
  16. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
  17. He, M., Chen, D., Liao, J., Sander, P.V., Yuan, L.: Deep exemplar-based colorization. ACM Trans. Graph. 37(4), 47 (2018)
  18. Henisch, H.K., Henisch, B.A.: The Painted Photograph, 1839–1914: Origins, Techniques, Aspirations. Pennsylvania State University Press, University Park (1996)
  19. Hofinger, M., Bulò, S.R., Porzi, L., Knapitsch, A., Pock, T., Kontschieder, P.: Improving optical flow on a pyramid level. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12373, pp. 770–786. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58604-1_46
  20. Hurwitz, M.: Real war: how Peter Jackson's They Shall Not Grow Old breathed life into 100-year-old archival footage (2019). https://www.studiodaily.com/2019/05/real-war-peter-jacksons-shall-not-grow-old-breathed-life-100-year-old-archival-footage/
  21. Iizuka, S., Simo-Serra, E.: DeepRemaster: temporal source-reference attention networks for comprehensive video enhancement. ACM Trans. Graph. 38(6), 1–13 (2019)
  22. Iizuka, S., Simo-Serra, E., Ishikawa, H.: Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Trans. Graph. 35(4), 1–11 (2016)
  23. Irony, R., Cohen-Or, D., Lischinski, D.: Colorization by example. In: Eurographics Symposium on Rendering, pp. 201–210 (2005)
  24. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. In: Conference on Computer Vision and Pattern Recognition, pp. 1125–1134 (2017)
  25. Jampani, V., Gadde, R., Gehler, P.V.: Video propagation networks. In: Conference on Computer Vision and Pattern Recognition, pp. 451–461 (2017)
  26. Kobler, E., Effland, A., Kunisch, K., Pock, T.: Total deep variation for linear inverse problems. In: IEEE Conference on Computer Vision and Pattern Recognition (2020)
  27. Kumar, M., Weissenborn, D., Kalchbrenner, N.: Colorization transformer. In: International Conference on Learning Representations (2021)
  28. Larsson, G., Maire, M., Shakhnarovich, G.: Learning representations for automatic colorization. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9908, pp. 577–593. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46493-0_35
  29. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
  30. Lei, C., Chen, Q.: Fully automatic video colorization with self-regularization and diversity. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3753–3761 (2019)
  31. Leitner, D.: The documentary masterpiece that is Peter Jackson's They Shall Not Grow Old (2018). https://filmmakermagazine.com/106589-the-documentary-masterpiece-that-is-peter-jacksons-they-shall-not-grow-old
  32. Levin, A., Lischinski, D., Weiss, Y.: Colorization using optimization. In: ACM SIGGRAPH 2004 Papers, pp. 689–694 (2004)
  33. Liao, J., Yao, Y., Yuan, L., Hua, G., Kang, S.B.: Visual attribute transfer through deep image analogy. arXiv:1705.01088 (2017)
  34. Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48
  35. Luan, Q., Wen, F., Cohen-Or, D., Liang, L., Xu, Y.Q., Shum, H.Y.: Natural image colorization. In: Eurographics Conference on Rendering Techniques, pp. 309–320 (2007)
  36. Luo, M., Cui, G., Rigg, B.: The development of the CIE 2000 colour-difference formula: CIEDE2000. Color Res. Appl. 26, 340–350 (2001). https://doi.org/10.1002/col.1049
  37. Meister, S., Hur, J., Roth, S.: UnFlow: unsupervised learning of optical flow with a bidirectional census loss. In: AAAI (2018)
  38. Meyer, S., Cornillère, V., Djelouah, A., Schroers, C., Gross, M.: Deep video color propagation. In: British Machine Vision Conference (2018)
  39. Mouzon, T., Pierre, F., Berger, M.O.: Joint CNN and variational model for fully-automatic image colorization. In: International Conference on Scale Space and Variational Methods in Computer Vision, pp. 535–546 (2019)
  40. Pierre, F., Aujol, J.F., Bugeau, A., Papadakis, N., Ta, V.T.: Luminance-chrominance model for image colorization. SIAM J. Imaging Sci. 8(1), 536–563 (2015)
  41. Pierre, F., Aujol, J.F., Bugeau, A., Ta, V.T.: Interactive video colorization within a variational framework. SIAM J. Imaging Sci. 10(4), 2293–2325 (2017)
  42. Pierre, F., Aujol, J.F.: Recent approaches for image colorization (2020). https://hal.archives-ouvertes.fr/hal-02965137
  43. Pierre, F., Aujol, J.-F., Bugeau, A., Ta, V.-T.: A unified model for image colorization. In: Agapito, L., Bronstein, M.M., Rother, C. (eds.) ECCV 2014. LNCS, vol. 8927, pp. 297–308. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16199-0_21
  44. Pinetz, T., Kobler, E., Pock, T., Effland, A.: Shared prior learning of energy-based models for image reconstruction. arXiv:2011.06539 (2020)
  45. Pont-Tuset, J., et al.: The 2017 DAVIS challenge on video object segmentation. arXiv:1704.00675 (2017)
  46. Qu, Y., Wong, T.T., Heng, P.A.: Manga colorization. ACM Trans. Graph. 25(3), 1214–1220 (2006)
  47. Reinhard, E., Ashikhmin, M., Gooch, B., Shirley, P.: Color transfer between images. IEEE Comput. Graph. Appl. 21(5), 34–41 (2001)
  48. Royer, A., Kolesnikov, A., Lampert, C.H.: Probabilistic image colorization. In: British Machine Vision Conference (2018)
  49. Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vision 115(3), 211–252 (2015)
  50. Shrivastava, A., Gupta, A., Girshick, R.: Training region-based object detectors with online hard example mining. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 761–769 (2016)
  51. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (2015)
  52. Su, J.W., Chu, H.K., Huang, J.B.: Instance-aware image colorization. In: Conference on Computer Vision and Pattern Recognition, pp. 7968–7977 (2020)
  53. Sýkora, D., Buriánek, J., Žára, J.: Unsupervised colorization of black-and-white cartoons. In: International Symposium on Non-photorealistic Animation and Rendering, pp. 121–127 (2004)
  54. Teed, Z., Deng, J.: RAFT: recurrent all-pairs field transforms for optical flow. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12347, pp. 402–419. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58536-5_24
  55. Ulyanov, D., Vedaldi, A., Lempitsky, V.S.: Instance normalization: the missing ingredient for fast stylization. arXiv:1607.08022 (2016)
  56. Vondrick, C., Shrivastava, A., Fathi, A., Guadarrama, S., Murphy, K.: Tracking emerges by colorizing videos. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11217, pp. 402–419. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01261-8_24
  57. Welsh, T., Ashikhmin, M., Mueller, K.: Transferring color to greyscale images. In: Conference on Computer Graphics and Interactive Techniques, pp. 277–280 (2002)
  58. Xu, L., Yan, Q., Jia, J.: A sparse control model for image and video editing. ACM Trans. Graph. 32(6), 1–10 (2013)
  59. Yatziv, L., Sapiro, G.: Fast image and video colorization using chrominance blending. IEEE Trans. Image Process. 15(5), 1120–1129 (2006)
  60. Zhang, B., et al.: Deep exemplar-based video colorization. In: Conference on Computer Vision and Pattern Recognition, pp. 8052–8061 (2019)
  61. Zhang, R., Isola, P., Efros, A.A.: Colorful image colorization. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 649–666. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_40
  62. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595 (2018)
  63. Zhang, R., et al.: Real-time user-guided image colorization with learned deep priors. ACM Trans. Graph. 36(4), 1–11 (2017)

Acknowledgement

This work was supported by the FFG BRIDGE program under the project RE:Color (no. 877161). Alexander Effland was also supported by the German Research Foundation under Germany's Excellence Strategy, EXC-2047/1-390685813 and EXC-2151-390873048.

Author information

Corresponding author

Correspondence to Markus Hofinger.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 16,637 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hofinger, M., Kobler, E., Effland, A., Pock, T. (2022). Learned Variational Video Color Propagation. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13683. Springer, Cham. https://doi.org/10.1007/978-3-031-20050-2_30

  • DOI: https://doi.org/10.1007/978-3-031-20050-2_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20049-6

  • Online ISBN: 978-3-031-20050-2

  • eBook Packages: Computer Science, Computer Science (R0)
