A Comprehensive Benchmark Analysis of Single Image Deraining: Current Challenges and Future Perspectives

Published in International Journal of Computer Vision

Abstract

The capability of image deraining is a highly desirable component of intelligent decision-making in autonomous driving and outdoor surveillance systems. Image deraining aims to restore the clean scene from a degraded image captured on a rainy day. Although numerous single image deraining algorithms have been proposed recently, they are mainly evaluated on a certain type of synthetic image, assuming a specific rain model, plus a few real images. It remains unclear how these algorithms would perform on rainy images acquired “in the wild” and how we could gauge progress in the field. This paper aims to bridge this gap. We present a comprehensive study and evaluation of existing single image deraining algorithms, using a new large-scale benchmark consisting of both synthetic and real-world rainy images of various rain types. The dataset covers diverse rain models (rain streak, rain drop, rain and mist). We further provide a comprehensive suite of criteria for deraining algorithm evaluation, including full- and no-reference objective metrics, subjective evaluation, and a novel task-driven evaluation. The proposed benchmark is accompanied by extensive experimental results that facilitate quantitative assessment of the state of the art. Our evaluation and analysis reveal the gap between the achievable performance on synthetic rainy images and the practical demands of real-world images. We show that, despite many advances, image deraining remains a largely open problem. We conclude by summarizing our general observations, identifying open research challenges, and pointing out future directions. Our code and dataset are publicly available at http://uee.me/ddQsw.


Notes

  1. Note that for Rain drop (T), the data was generated by physical simulation (Qian et al. 2018), i.e., capturing with/without a lens, rather than by algorithmic simulation.

  2. Note that in Liu et al. (2014) and Saad et al. (2012), a smaller SSEQ/BLIINDS-II score indicates better perceptual quality. We reverse the two scores (100 minus the raw score) so that their trends are consistent with the full-reference metrics: in our tables, larger values of these two metrics indicate better perceptual quality. We do not do the same for NIQE, because NIQE has no bounded maximum value.

  3. We did not include GMM for the two sets because (1) it did not yield promising results when we applied it to (part of) the two sets, and (2) it runs very slowly, which is prohibitive given the size of the two sets.
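The score harmonization described in Note 2 can be sketched as follows. This is a minimal illustration; the function name and the example score values are hypothetical, not taken from the paper, but the arithmetic (100 minus the raw score for the two bounded metrics, NIQE left unchanged) is exactly what the note describes.

```python
def harmonize_no_reference_scores(sseq, bliinds2, niqe):
    """Flip SSEQ and BLIINDS-II (both bounded in [0, 100], lower raw
    score = better quality) so that larger values mean better perceptual
    quality, matching the convention of full-reference metrics.
    NIQE is returned unchanged because it has no bounded maximum."""
    return {
        "SSEQ": 100.0 - sseq,
        "BLIINDS-II": 100.0 - bliinds2,
        "NIQE": niqe,  # unchanged: unbounded above, cannot be flipped the same way
    }

# Hypothetical raw scores for one derained image:
scores = harmonize_no_reference_scores(sseq=30.0, bliinds2=45.0, niqe=4.2)
print(scores)  # {'SSEQ': 70.0, 'BLIINDS-II': 55.0, 'NIQE': 4.2}
```

After this transformation, "bigger is better" holds uniformly for SSEQ and BLIINDS-II columns in the tables, so their trends can be read alongside PSNR and SSIM without flipping signs mentally.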

References

  • Barnum, P. C., Narasimhan, S., & Kanade, T. (2010). Analysis of rain and snow in frequency space. International Journal of Computer Vision, 86(2–3), 256.

  • Bossu, J., Hautière, N., & Tarel, J.-P. (2011). Rain or snow detection in image sequences through use of a histogram of orientation of streaks. International Journal of Computer Vision, 93(3), 348–367.

  • Bradley, R. A., & Terry, M. E. (1952). Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4), 324–345.

  • Chen, W., Yu, Z., Wang, Z., & Anandkumar, A. (2020). Automated synthetic-to-real generalization. arXiv preprint arXiv:2007.06965.

  • Chen, Y., Li, W., Sakaridis, C., Dai, D., & Van Gool, L. (2018). Domain adaptive faster r-cnn for object detection in the wild. In IEEE conference on computer vision and pattern recognition (pp. 3339–3348).

  • Chen, Y.-L. & Hsu, C.-T. (2013). A generalized low-rank appearance model for spatio-temporally correlated rain streaks. In IEEE international conference on computer vision (pp. 1968–1975).

  • Choi, L. K., You, J., & Bovik, A. C. (2015). Referenceless prediction of perceptual fog density and perceptual image defogging. IEEE Transactions on Image Processing, 24(11), 3888–3901.

  • Dai, D., Sakaridis, C., Hecker, S., & Van Gool, L. (2020). Curriculum model adaptation with synthetic and real data for semantic foggy scene understanding. International Journal of Computer Vision, 128(5), 1182–1204.

  • Dai, D., Wang, Y., Chen, Y., & Van Gool, L. (2016). Is image super-resolution helpful for other vision tasks? In Winter conference on applications of computer vision. IEEE.

  • Ding, X., Chen, L., Zheng, X., Huang, Y., & Zeng, D. (2016). Single image rain and snow removal via guided l0 smoothing filter. Multimedia Tools and Applications, 75(5), 2697–2712.

  • Eigen, D., Krishnan, D., & Fergus, R. (2013). Restoring an image taken through a window covered with dirt or rain. In IEEE international conference on computer vision.

  • Fu, X., Huang, J., Ding, X., Liao, Y., & Paisley, J. (2017a). Clearing the skies: A deep network architecture for single-image rain removal. IEEE Transactions on Image Processing, 26(6), 2944–2956.

  • Fu, X., Huang, J., Zeng, D., Huang, Y., Ding, X., & Paisley, J. (2017b). Removing rain from single images via a deep detail network. In IEEE conference on computer vision and pattern recognition.

  • Fu, X., Liang, B., Huang, Y., Ding, X., & Paisley, J. W. (2020). Lightweight pyramid networks for image deraining. IEEE Transactions on Neural Networks and Learning Systems, 31(6), 1794–1807.

  • Garg, K. & Nayar, S. K. (2004). Detection and removal of rain from videos. In IEEE conference on computer vision and pattern recognition.

  • Garg, K., & Nayar, S. K. (2005). When does a camera see rain? In IEEE International conference on computer vision.

  • Gu, S., Meng, D., Zuo, W., & Zhang, L. (2017) Joint convolutional analysis and synthesis sparse representation for single image layer separation. In IEEE international conference on computer vision (pp. 1717–1725).

  • Hahner, M., Dai, D., Sakaridis, C., Zaech, J.-N., & Van Gool, L. (2019). Semantic understanding of foggy scenes with purely synthetic data. In Intelligent transportation systems conference (pp. 3675–3681).

  • Halder, S. S., Lalonde, J.-F., & de Charette, R. (2019). Physics-based rendering for improving robustness to rain. In IEEE international conference on computer vision (pp. 10203–10212).

  • Hu, X., Fu, C.-W., Zhu, L. & Heng, P.-A. (2019). Depth-attentional features for single-image rain removal. In IEEE conference on computer vision and pattern recognition (pp. 8022–8031).

  • Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., & Wang, Y. (2017). A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors. In IEEE conference on computer vision and pattern recognition.

  • Jiang, Y., Gong, X., Liu, D., Cheng, Y., Fang, C., Shen, X., Yang, J., Zhou, P., & Wang, Z. (2019). Enlightengan: Deep light enhancement without paired supervision. arXiv preprint arXiv:1906.06972.

  • Jin, X., Chen, Z., Lin, J., Chen, Z., & Zhou, W. (2019). Unsupervised single image deraining with self-supervised constraints. In IEEE international conference on image processing (pp. 2761–2765).

  • Kang, L.-W., Lin, C.-W., & Fu, Y.-H. (2012). Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing, 21(4), 1742.

  • Honauer, K., Johannsen, O., Kondermann, D., & Goldluecke, B. (2016). A dataset and evaluation methodology for depth estimation on 4d light fields. In Asian conference on computer vision (pp. 19–34).

  • Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., & Matas, J. (2018). Deblurgan: Blind motion deblurring using conditional adversarial networks. In IEEE conference on computer vision and pattern recognition (pp. 8183–8192).

  • Kupyn, O., Martyniuk, T., Wu, J., & Wang, Z. (2019). Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better. In Proceedings of the IEEE international conference on computer vision (pp. 8878–8887).

  • Lai, W.-S., Huang, J.-B., Hu, Z., Ahuja, N., & Yang, M.-H. (2016). A comparative study for single image blind deblurring. In IEEE conference on computer vision and pattern recognition (pp. 1701–1709).

  • Zhu, L., Fu, C.-W., Lischinski, D., & Heng, P.-A. (2017). Joint bilayer optimization for single-image rain streak removal. In IEEE international conference on computer vision.

  • Li, B., Peng, X., Wang, Z., Xu, J. & Feng, D. (2017). Aod-net: All-in-one dehazing network. In IEEE international conference on computer vision (pp. 4770–4778).

  • Li, B., Peng, X., Wang, Z., Xu, J., & Feng, D. (2018). End-to-end united video dehazing and detection. In AAAI conference on artificial intelligence.

  • Li, B., Ren, W., Fu, D., Tao, D., Feng, D., Zeng, W., et al. (2019a). Benchmarking single-image dehazing and beyond. IEEE Transactions on Image Processing, 28(1), 492–505.

  • Li, R., Cheong, L.-F. & Tan, R. T. (2019b). Heavy rain image restoration: Integrating physics model and conditional adversarial learning. In IEEE conference on computer vision and pattern recognition (pp. 1633–1642).

  • Li, S., Araujo, I. B., Ren, W., Wang, Z., Tokuda, E. K., Hirata Junior, R., Cesar-Junior, R., Zhang, J., Guo, X., & Cao, X. (2019c). Single image deraining: A comprehensive benchmark analysis. In IEEE conference on computer vision and pattern recognition (pp. 3838–3847).

  • Li, Y., Tan, R. T., Guo, X., Lu, J., & Brown, M. S. (2016). Rain streak removal using layer priors. In IEEE conference on computer vision and pattern recognition (pp. 2736–2744).

  • Li, Y., Tan, R. T., Guo, X., Lu, J., & Brown, M. S. (2017). Single image rain streak decomposition using layer priors. IEEE Transactions on Image Processing, 26(8), 3874–3885.

  • Lin, T.-Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2018). Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • Lin, T.-Y., Maire, M., Belongie, S. J., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C. L. (2014). Microsoft coco: Common objects in context. In European conference on computer vision (pp. 740–755).

  • Liu, D., Cheng, B., Wang, Z., Zhang, H., & Huang, T. S. (2019). Enhance visual recognition under adverse conditions via deep networks. IEEE Transactions on Image Processing, 28(9), 4401–4412.

  • Liu, D., Wen, B., Jiao, J., Liu, X., Wang, Z., & Huang, T. S. (2020). Connecting image denoising and high-level vision tasks via deep learning. IEEE Transactions on Image Processing, 29, 3695–3706.

  • Liu, D., Wen, B., Liu, X., Wang, Z., & Huang, T. S. (2018). When image denoising meets high-level vision tasks: A deep learning approach. In International joint conference on artificial intelligence (pp. 842–848).

  • Liu, F., Shen, C., Lin, G., & Reid, I. (2016). Learning depth from single monocular images using deep convolutional neural fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10), 2024–2039.

  • Liu, L., Liu, B., Huang, H., & Bovik, A. C. (2014). No-reference image quality assessment based on spatial and spectral entropies. Signal Processing: Image Communication, 29(8), 856–863.

  • Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., Berg, A. C. (2016). Ssd: Single shot multibox detector. In European conference on computer vision (pp. 21–37).

  • Liu, Y., Zhao, G., Gong, B., Li, Y., Raj, R., Goel, N., Kesav, S., Gottimukkala, S., Wang, Z., Ren, W., et al. (2018). Improved techniques for learning to dehaze and beyond: A collective study. arXiv preprint arXiv:1807.00202.

  • Luo, Y., Xu, Y., & Ji, H. (2015). Removing rain from a single image via discriminative sparse coding. In IEEE international conference on computer vision.

  • McCartney, E. J. (1976). Optics of the atmosphere: Scattering by molecules and particles (p. 421). New York: Wiley.

  • Mittal, A., Soundararajan, R., & Bovik, A. C. (2013). Making a “completely blind” image quality analyzer. IEEE Signal Processing Letters, 20(3), 209–212.

  • Pei, Y., Huang, Y., Zou, Q., Lu, Y., & Wang, S. (2018). Does haze removal help cnn-based image classification? arXiv:1810.05716.

  • Qian, R., Tan, R. T., Yang, W., Su, J., & Liu, J. (2018). Attentive generative adversarial network for raindrop removal from a single image. In IEEE conference on computer vision and pattern recognition.

  • Redmon, J. & Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv:1804.02767.

  • Ren, D., Zuo, W., Hu, Q., Zhu, P., & Meng, D. (2019). Progressive image deraining networks: A better and simpler baseline. In IEEE conference on computer vision and pattern recognition (pp. 3937–3946).

  • Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems (pp. 91–99).

  • Ren, W., Liu, S., Zhang, H., Pan, J., Cao, X., & Yang, M.-H. (2016). Single image dehazing via multi-scale convolutional neural networks. In European conference on computer vision.

  • Ren, W., Ma, L., Zhang, J., Pan, J., Cao, X., Liu, W., & Yang, M.-H. (2018b). Gated fusion network for single image dehazing. In IEEE conference on computer vision and pattern recognition (pp. 3253–3261).

  • Ren, W., Pan, J., Zhang, H., Cao, X., & Yang, M.-H. (2020). Single image dehazing via multi-scale convolutional neural networks with holistic edges. International Journal of Computer Vision, 128(1), 240–259.

  • Ren, W., Tian, J., Han, Z., Chan, A., & Tang, Y. (2017). Video desnowing and deraining based on matrix decomposition. In IEEE conference on computer vision and pattern recognition.

  • Ren, W., Zhang, J., Xu, X., Ma, L., Cao, X., Meng, G., et al. (2018a). Deep video dehazing with semantic segmentation. IEEE Transactions on Image Processing, 28(4), 1895–1908.

  • Saad, M. A., Bovik, A. C., & Charrier, C. (2012). Blind image quality assessment: A natural scene statistics approach in the dct domain. IEEE Transactions on Image Processing, 21(8), 3339–3352.

  • Sakaridis, C., Dai, D., & Van Gool, L. (2018). Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126(9), 973–992.

  • Santhaseelan, V., & Asari, V. K. (2015). Utilizing local phase information to remove rain from video. International Journal of Computer Vision, 112(1), 71–89.

  • Scharstein, D., & Szeliski, R. (2002). A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47(1–3), 7–42.

  • Scheirer, W., VidalMata, R., Banerjee, S., RichardWebster, B., Albright, M., Davalos, P., McCloskey, S., Miller, B., Tambo, A., Ghosh, S., et al. (2020). Bridging the gap between computational photography and visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.

  • Schops, T., Schonberger, J. L., Galliani, S., Sattler, T., Schindler, K., Pollefeys, M. & Geiger, A. (2017). A multi-view stereo benchmark with high-resolution images and multi-camera videos. In IEEE conference on computer vision and pattern recognition (pp. 3260–3269).

  • Sheng, H., Zheng, Y., Ke, W., Yu, D., Cheng, X., Lv, W., et al. (2020). Mining hard samples globally and efficiently for person re-identification. IEEE Internet of Things Journal, 7(10), 9611–9622.

  • Sun, S.-H., Fan, S.-P., & Wang, Y.-C. F. (2014). Exploiting image structural similarity for single image rain removal. In IEEE international conference on image processing (pp. 4482–4486).

  • Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., et al. (2008). A comparative study of energy minimization methods for markov random fields with smoothness-based priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(6), 1068–1080.

  • Tokuda, E. K., Lockerman, Y., Ferreira, G. B. A., Sorrelgreen, E., Boyle, D., Cesar-Jr, R. M., et al. (2020). A new approach for pedestrian density estimation using moving sensors and computer vision. ACM Transactions on Spatial Algorithms and Systems (TSAS), 6(4), 1–20.

  • Wang, T., Yang, X., Xu, K., Chen, S., Zhang, Q., & Lau, R. W. H. (2019). Spatial attentive single-image deraining with a high quality real rain dataset. In IEEE conference on computer vision and pattern recognition.

  • Wang, Z., Chang, S., Yang, Y., Liu, D., & Huang, T. S. (2016). Studying very low resolution recognition using deep networks. In IEEE conference on computer vision and pattern recognition (pp. 4792–4800).

  • Wei, W., Meng, D., Zhao, Q., Xu, Z., Wu, Y. (2019). Semi-supervised transfer learning for image rain removal. In IEEE conference on computer vision and pattern recognition.

  • Qin, X., Wang, Z., Bai, Y., Xie, X., & Jia, H. (2020). Ffa-net: Feature fusion attention network for single image dehazing. In AAAI conference on artificial intelligence.

  • Yang, W., Tan, R. T., Feng, J., Liu, J., Guo, Z., & Yan, S. (2016). Joint rain detection and removal via iterative region dependent multi-task learning. CoRR, abs/1609.07769.

  • Yang, W., Tan, R. T., Feng, J., Liu, J., Guo, Z., & Yan, S.(2017). Deep joint rain detection and removal from a single image. In IEEE conference on computer vision and pattern recognition.

  • Yang, W., Yuan, Y., Ren, W., Liu, J., Scheirer, W. J., Wang, Z., et al. (2020). Advancing image understanding in poor visibility environments: A collective benchmark study. IEEE Transactions on Image Processing, 29, 5737–5752.

  • Yasarla, R., Sindagi, V. A., & Patel, V. M. (2020). Syn2real transfer learning for image deraining using gaussian processes. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2726–2736).

  • You, S., Tan, R. T., Kawakami, R., Mukaigawa, Y., & Ikeuchi, K. (2016). Adherent raindrop modeling, detection and removal in video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9), 1721–1733.

  • Yu, Y., Liu, Y., Zhang, H., Chen, S., & Qiao, Y. (2020). FD-GAN: Generative adversarial networks with fusion-discriminator for single image dehazing. In AAAI conference on artificial intelligence.

  • Zhang, H. & Patel, V. M. (2018). Density-aware single image de-raining using a multi-stream dense network. In IEEE conference on computer vision and pattern recognition.

  • Zhang, H., Sindagi, V., & Patel, V. M. (2019). Image de-raining using a conditional generative adversarial network. IEEE Transactions on Circuits and Systems for Video Technology.

  • Zhang, K., Zuo, W., Gu, S., & Zhang, L. (2017) Learning deep CNN denoiser prior for image restoration. In IEEE conference on computer vision and pattern recognition.

  • Zheng, X., Liao, Y., Guo, W., Fu, X., & Ding, X. (2013). Single-image-based rain and snow removal using multi-guided filter. In International conference on neural information processing.

  • Zhou, X., Wang, D., Krähenbühl, P. (2019). Objects as points. arXiv:1904.07850.

  • Zhu, J.-Y., Park, T., Isola, P. & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint.

Download references

Acknowledgements

This work is supported by the National Key R&D Program of China under Grant 2019YFB1406500, National Natural Science Foundation of China (Nos. 61802403, U1605252, U1736219), Beijing Education Committee Cooperation Beijing Natural Science Foundation (No. KZ201910005007), Beijing Nova Program (No. Z201100006820074), Beijing Natural Science Foundation (No. L182057), Peng Cheng Laboratory Project of Guangdong Province PCL2018KP004, the Elite Scientist Sponsorship Program by the Beijing Association for Science and Technology, CAPES, CNPq, and the Funding Agency FAPESP (No. 15/22308-2).

Author information

Corresponding author

Correspondence to Wenqi Ren.

Additional information

Communicated by Torsten Sattler.

About this article

Cite this article

Li, S., Ren, W., Wang, F. et al. A Comprehensive Benchmark Analysis of Single Image Deraining: Current Challenges and Future Perspectives. Int J Comput Vis 129, 1301–1322 (2021). https://doi.org/10.1007/s11263-020-01416-w
