Enhanced context encoding for single image raindrop removal

Article published in Science China Technological Sciences

Abstract

Despite the great success of convolutional neural networks in addressing the raindrop removal problem, the still relatively blurry results call for better problem formulations and network architectures. In this paper, we revisit rainy-to-clean translation networks and identify an imbalanced distribution between raindrops and the varied background scenes. None of the existing raindrop removal networks considers this underlying issue, so the learned representations are biased towards modeling raindrop regions while paying insufficient attention to the important contextual regions. To learn a more powerful raindrop removal model, we propose explicitly learning a soft mask map to mitigate the imbalanced distribution problem. Specifically, a two-stage network is designed: the first stage generates the soft masks, which help the second stage learn a context-enhanced representation. To better model the heterogeneously distributed raindrops, a multi-scale dense residual block is designed to construct the hierarchical rainy-to-clean image translation network. Comprehensive experimental results demonstrate the significant superiority of the proposed models over state-of-the-art methods.
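
The following PyTorch sketch illustrates the two-stage design described above: a first stage that predicts a soft raindrop mask, and a second stage that restores the image from the rainy input concatenated with that mask, with a multi-scale dense residual block used in both stages. All module names, channel widths, dilation rates, and layer counts here are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Minimal sketch of the two-stage, soft-mask-guided idea from the abstract.
# Channel widths, dilation rates, and layer counts are assumptions.
import torch
import torch.nn as nn


class MultiScaleDenseResidualBlock(nn.Module):
    """Densely connected convolutions at several dilation rates plus a residual path."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Each scale sees the input plus all previously produced feature maps.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.conv2 = nn.Conv2d(2 * channels, channels, 3, padding=2, dilation=2)
        self.conv3 = nn.Conv2d(3 * channels, channels, 3, padding=4, dilation=4)
        self.fuse = nn.Conv2d(4 * channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.act(self.conv1(x))
        f2 = self.act(self.conv2(torch.cat([x, f1], dim=1)))
        f3 = self.act(self.conv3(torch.cat([x, f1, f2], dim=1)))
        out = self.fuse(torch.cat([x, f1, f2, f3], dim=1))
        return out + x  # residual connection


class TwoStageRaindropRemoval(nn.Module):
    """Stage 1 predicts a soft raindrop mask; stage 2 restores the image
    from the rainy input concatenated with that mask."""

    def __init__(self, base: int = 32):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Conv2d(3, base, 3, padding=1), nn.ReLU(inplace=True),
            MultiScaleDenseResidualBlock(base),
            nn.Conv2d(base, 1, 3, padding=1), nn.Sigmoid(),  # soft mask in [0, 1]
        )
        self.restore_net = nn.Sequential(
            nn.Conv2d(3 + 1, base, 3, padding=1), nn.ReLU(inplace=True),
            MultiScaleDenseResidualBlock(base),
            MultiScaleDenseResidualBlock(base),
            nn.Conv2d(base, 3, 3, padding=1),
        )

    def forward(self, rainy):
        mask = self.mask_net(rainy)                                  # stage 1: soft mask
        clean = self.restore_net(torch.cat([rainy, mask], dim=1))    # stage 2: restoration
        return clean, mask


if __name__ == "__main__":
    model = TwoStageRaindropRemoval()
    clean, mask = model(torch.randn(1, 3, 128, 128))
    print(clean.shape, mask.shape)  # (1, 3, 128, 128) and (1, 1, 128, 128)
```

Only the forward pass is sketched; the loss functions, mask supervision, and training procedure used in the paper are not reproduced here.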



Author information

Corresponding author

Correspondence to Yang Yang.

Additional information

This work was supported by the Joint Funds of the National Natural Science Foundation of China (Grant No. U20B2063), the Sichuan Science and Technology Program (Grant No. 2020YFS0057), and the Fundamental Research Funds for the Central Universities (Grant No. ZYGX2019Z015).


About this article


Cite this article

Wang, G., Yang, Y., Xu, X. et al. Enhanced context encoding for single image raindrop removal. Sci. China Technol. Sci. 64, 2640–2650 (2021). https://doi.org/10.1007/s11431-021-1914-8

