
Dual-task complementary networks for single-image deraining

  • Original Paper
  • Published in Signal, Image and Video Processing

Abstract

Single-image rain removal is an extremely challenging task: it requires not only removing rain streaks with complex shapes, scales, and opacities but also recovering the spatial details and high-level contextual structures of the underlying image. Although deep learning networks have achieved encouraging performance, current research mainly focuses on building deeper and more complex architectures to recover reliable detailed textures, or on using multi-scale encoder-decoder structures to learn semantic contexts over larger receptive fields; these approaches still do not adequately balance rain streak removal and detail preservation. In this study, we propose a novel end-to-end network, called the dual-task complementary network (DTCN), composed of a detail recovery progressive network (DRPN) and a multi-feature fusion encoder-decoder network (MEDN), to balance rain streak removal and detail preservation. Specifically, the DRPN is designed to recover details of the original image, while the MEDN removes structural rain streaks. In addition, to reconstruct more natural and clearer images, we integrate multiple training losses, including a structural similarity loss, a perceptual contrast loss, a perceptual image similarity loss, and an edge loss. Experimental results demonstrate that our method outperforms several state-of-the-art methods on both synthetic and real rainy images. The code will be uploaded to https://github.com/zhang152267/DTCN.
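To make the dual-branch idea concrete, below is a minimal PyTorch-style sketch of how two complementary tasks of this kind could be wired together: a streak-removal branch produces a coarse rain-free estimate, and a detail branch adds back fine textures, supervised in part by a simple gradient-based edge loss. The branch names follow the abstract (DRPN, MEDN), but the module internals, layer sizes, weighting, and the `edge_loss` formulation are illustrative assumptions, not the architecture or losses used in the paper.

```python
# Hypothetical sketch of a dual-branch deraining model and an edge loss.
# Branch roles follow the abstract (DRPN, MEDN); all internals are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DetailBranch(nn.Module):
    """Stand-in for the detail recovery progressive network (DRPN)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, rainy: torch.Tensor) -> torch.Tensor:
        # Predict a detail map that is added back to the coarse result.
        return self.body(rainy)


class StreakBranch(nn.Module):
    """Stand-in for the multi-feature fusion encoder-decoder network (MEDN)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2 * channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )

    def forward(self, rainy: torch.Tensor) -> torch.Tensor:
        # Estimate the structural rain streak layer.
        return self.decoder(self.encoder(rainy))


class DualTaskDeraining(nn.Module):
    """Combine the two complementary tasks: remove streaks, then restore detail."""

    def __init__(self):
        super().__init__()
        self.streak = StreakBranch()
        self.detail = DetailBranch()

    def forward(self, rainy: torch.Tensor) -> torch.Tensor:
        coarse = rainy - self.streak(rainy)      # streak-removed (coarse) estimate
        restored = coarse + self.detail(rainy)   # add recovered fine details
        return restored.clamp(0.0, 1.0)


def edge_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """One plausible edge loss: L1 distance between horizontal/vertical image gradients."""
    dx_p, dy_p = pred[..., :, 1:] - pred[..., :, :-1], pred[..., 1:, :] - pred[..., :-1, :]
    dx_t, dy_t = target[..., :, 1:] - target[..., :, :-1], target[..., 1:, :] - target[..., :-1, :]
    return F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)


if __name__ == "__main__":
    model = DualTaskDeraining()
    rainy = torch.rand(1, 3, 64, 64)   # dummy rainy input in [0, 1]
    clean = torch.rand(1, 3, 64, 64)   # dummy ground truth
    out = model(rainy)
    # The paper's full objective also includes SSIM, perceptual contrast, and
    # perceptual similarity terms; only pixel and edge terms are shown here.
    loss = F.l1_loss(out, clean) + 0.1 * edge_loss(out, clean)
    print(out.shape, float(loss))
```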


Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.


Author information


Contributions

Heng Zhang wrote the main manuscript text and performed the related experiments. Dongli Jia provided guidance. All authors reviewed and revised the manuscript.

Corresponding author

Correspondence to Dongli Jia.

Ethics declarations

Conflict of interest

The authors did not receive support from any organization for the submitted work and have no relevant financial or non-financial interests to disclose. Heng Zhang, Dongli Jia, and Zixian Han each declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, H., Jia, D. & Han, Z. Dual-task complementary networks for single-image deraining. SIViP 17, 4171–4179 (2023). https://doi.org/10.1007/s11760-023-02649-1
