Video flickering removal using temporal reconstruction optimization


In this paper, we introduce an approach to remove flicker from videos, where the flicker is caused by applying image-based processing methods to the original video frame by frame. First, we propose a multi-frame video flicker removal method that reconstructs each flickering frame from multiple temporally corresponding frames. Compared with traditional methods, which reconstruct a flickering frame from a single adjacent frame, reconstruction from multiple temporally corresponding frames reduces warping inaccuracy. We then optimize the method in two ways. On the one hand, we detect flickering frames in the video sequence with temporal consistency metrics, so that only the flickering frames need to be reconstructed, which greatly accelerates the algorithm. On the other hand, we use only the preceding temporally corresponding frames to reconstruct each output frame. We further accelerate flicker removal with a GPU implementation. Qualitative experimental results demonstrate the effectiveness of the proposed method, and with algorithmic optimization and GPU acceleration its running time also outperforms that of traditional video temporal coherence methods.
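The pipeline described above, detecting flickering frames with a temporal consistency metric and reconstructing only those frames from multiple preceding frames, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the threshold, function names, and the use of plain frame averaging in place of motion-compensated warping of temporally corresponding frames are all simplifying assumptions.

```python
import numpy as np

def detect_flicker(frames, thresh=10.0):
    """Flag frames whose mean brightness jumps by more than `thresh`
    relative to the previous frame (a simple temporal consistency metric;
    the threshold value is an assumption for illustration)."""
    flags = [False]  # the first frame has no predecessor to compare against
    for prev, cur in zip(frames, frames[1:]):
        flags.append(abs(cur.mean() - prev.mean()) > thresh)
    return flags

def reconstruct(frames, idx, n_prev=2):
    """Replace a flickering frame with the average of up to `n_prev`
    preceding frames. A faithful implementation would first warp each
    preceding frame to the current one (motion compensation), which is
    omitted here for brevity."""
    lo = max(0, idx - n_prev)
    return np.mean(frames[lo:idx], axis=0).astype(frames[idx].dtype)

def remove_flicker(frames, thresh=10.0, n_prev=2):
    """Detect flickering frames once, then reconstruct only those frames,
    using already-corrected predecessors as references."""
    out = list(frames)
    for i, bad in enumerate(detect_flicker(frames, thresh)):
        if bad and i > 0:
            out[i] = reconstruct(out, i, n_prev)
    return out
```

Skipping reconstruction for frames that pass the consistency check is what makes the detection step an acceleration: most frames are copied through untouched.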






Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 61672228, Grant 61872241, Grant 61572316, and Grant 61370174, in part by the National Key Research and Development Program of China under Grant 2017YFE0104000 and Grant 2016YFC1300302, in part by the Macau Science and Technology Development Fund under Grant 0027/2018/A1, in part by the Science and Technology Commission of Shanghai Municipality under Grant 18410750700, Grant 17411952600, and Grant 16DZ0501100, and in part by the Shanghai Automotive Industry Science and Technology Development Foundation under Grant 1837.

Author information



Corresponding authors

Correspondence to Zhihua Chen or Bin Sheng.



About this article


Cite this article

Li, C., Chen, Z., Sheng, B. et al. Video flickering removal using temporal reconstruction optimization. Multimed Tools Appl 79, 4661–4679 (2020).



Keywords

  • Video processing
  • Flickering removal
  • Multiple frames
  • Temporal coherence
  • Spatial coherence