
Efficient Video Deblurring Guided by Motion Magnitude

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13679)

Abstract

Video deblurring is a highly under-constrained problem due to spatially and temporally varying blur. An intuitive approach to video deblurring has two steps: a) detect the blurry regions in the current frame; b) use information from clear regions in adjacent frames to deblur the current frame. To realize this process, our idea is to detect the pixel-wise blur level of each frame and combine it with video deblurring. To this end, we propose a novel framework that uses a motion magnitude prior (MMP) to guide efficient deep video deblurring. Specifically, since the movement of a pixel along its trajectory during the exposure time is positively correlated with the level of motion blur, we first use the average magnitude of optical flow between high-frequency sharp frames to generate synthetic blurry frames and their corresponding pixel-wise motion magnitude maps. From these pairs we build a dataset of blurry frames and MMPs, and a compact CNN then learns the MMP by regression. The MMP encodes both spatial and temporal blur-level information and can be integrated into an efficient recurrent neural network (RNN) for video deblurring. Extensive experiments on public datasets validate the effectiveness of the proposed method. Our code is available at https://github.com/sollynoay/MMP-RNN.
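To make the data-generation step concrete, below is a minimal sketch of how such an MMP could be computed; it is not the authors' exact pipeline. It averages N consecutive sharp frames from a high-frame-rate video to synthesize one blurry frame, and averages the per-pixel optical-flow magnitude over the same window to form the motion magnitude map. The Farneback flow estimator and the max-normalization used here are illustrative assumptions.

```python
# Hedged sketch of motion magnitude prior (MMP) generation. Assumes a list of
# consecutive sharp grayscale frames from a high-frame-rate video; the flow
# estimator and normalization are illustrative choices, not the paper's.
import cv2
import numpy as np

def synthesize_blur_and_mmp(sharp_frames):
    """sharp_frames: list of HxW uint8 grayscale frames (consecutive)."""
    stack = np.stack([f.astype(np.float32) for f in sharp_frames], axis=0)
    # Averaging the sharp frames approximates the blur integral over exposure.
    blurry = stack.mean(axis=0)

    # Accumulate per-pixel flow magnitude between consecutive sharp frames.
    mag_sum = np.zeros(sharp_frames[0].shape, dtype=np.float32)
    for prev, nxt in zip(sharp_frames[:-1], sharp_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)  # HxWx2 (dx, dy)
        mag_sum += np.linalg.norm(flow, axis=2)

    mmp = mag_sum / (len(sharp_frames) - 1)   # average motion magnitude
    mmp /= max(mmp.max(), 1e-8)               # normalize to [0, 1]
    return blurry.astype(np.uint8), mmp
```

A compact CNN can then be trained to regress the map from the blurry frame alone, so that at inference time a blur-level estimate is available without access to the underlying sharp frames.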



Acknowledgements

This work was supported by JSPS KAKENHI Grant Numbers 22H00529 and 20H05951.

Author information

Correspondence to Yusheng Wang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 6306 KB)

Supplementary material 2 (mp4 3789 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Y. et al. (2022). Efficient Video Deblurring Guided by Motion Magnitude. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13679. Springer, Cham. https://doi.org/10.1007/978-3-031-19800-7_24


  • DOI: https://doi.org/10.1007/978-3-031-19800-7_24


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19799-4

  • Online ISBN: 978-3-031-19800-7

  • eBook Packages: Computer Science, Computer Science (R0)
