Dive into Coarse-to-Fine Strategy in Single Image Deblurring

Conference paper in MultiMedia Modeling (MMM 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14554)

Abstract

The coarse-to-fine approach has gained significant popularity in the design of networks for single image deblurring. Traditional methods typically employ U-shaped networks with a single encoder and decoder, which may not adequately capture complex motion blur patterns. Inspired by the concept of multi-task learning, we dive into the coarse-to-fine strategy and propose an all-direction, multi-input and multi-output network for image deblurring (ADMMDeblur). ADMMDeblur has two distinct features. First, it employs four decoders, each generating a unique residual that represents a specific motion direction, which enables the network to address motion blur in all directions within a two-dimensional (2D) scene. Second, the decoders utilize kernel rotation and sharing, which keeps them from separating unnecessary components. Consequently, the network achieves higher efficiency and better deblurring performance while requiring fewer parameters. Extensive experiments on the GoPro and HIDE datasets demonstrate that our proposed network achieves better deblurring accuracy with a smaller model size than existing well-performing methods.
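
To make the kernel rotation and sharing idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the module name RotationSharedDecoderBlock, the channel counts, and the per-branch plain convolution are all hypothetical. It only illustrates the principle stated in the abstract, namely that four directional decoder branches can reuse one learnable kernel rotated by 0, 90, 180, and 270 degrees, so directional coverage is gained without adding parameters.

import torch
import torch.nn as nn
import torch.nn.functional as F


class RotationSharedDecoderBlock(nn.Module):
    # Hypothetical sketch: one learnable kernel is shared by four decoder
    # branches; each branch applies the kernel rotated by k * 90 degrees,
    # so the branches specialize to different motion directions while the
    # parameter count stays that of a single convolution.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.02
        )
        self.bias = nn.Parameter(torch.zeros(channels))

    def forward(self, x):
        outputs = []
        pad = self.weight.shape[-1] // 2
        for k in range(4):
            # Rotate the shared kernel in the spatial dimensions (H, W).
            w = torch.rot90(self.weight, k, dims=(2, 3))
            outputs.append(F.conv2d(x, w, self.bias, padding=pad))
        return outputs  # one directional feature map per branch


if __name__ == "__main__":
    block = RotationSharedDecoderBlock(channels=8)
    feats = block(torch.randn(1, 8, 64, 64))
    print([tuple(f.shape) for f in feats])  # four outputs, all (1, 8, 64, 64)

The point of the sketch is that all four branches read from the same parameter tensor, which is consistent with the abstract's claim of handling all motion directions while requiring fewer parameters.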



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 62176161, and by the Scientific Research and Development Foundations of Shenzhen under Grants JCYJ20220818100005011 and 20200813144831001.

Author information

Correspondence to Jianping Luo.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Li, Z., Luo, J. (2024). Dive into Coarse-to-Fine Strategy in Single Image Deblurring. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14554. Springer, Cham. https://doi.org/10.1007/978-3-031-53305-1_5

  • DOI: https://doi.org/10.1007/978-3-031-53305-1_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53304-4

  • Online ISBN: 978-3-031-53305-1

  • eBook Packages: Computer Science, Computer Science (R0)
