
When dual contrastive learning meets disentangled features for unpaired image deraining

  • Original Paper
  • Published:
Machine Vision and Applications

Abstract

Single-image rain removal is a fundamental and challenging problem in image processing. Because real rain images rarely come with corresponding clean ground truth, most deraining networks are trained on synthetic datasets, which limits the quality of their outputs in practical applications. In this work, we propose a new feature-decoupling network for unsupervised image deraining. Its goal is to decompose a rain image into two distinguishable layers: a clean image layer and a rain layer. To fully decouple features with different attributes, we constrain this process with contrastive learning: similar image patches are pulled together as positive samples, while rain-layer patches are pushed away as negative samples. We exploit not only the inherent self-similarity within a sample but also the mutual exclusion between the two layers, so as to better separate the rain layer from the clean image. By implicitly constraining the embeddings of different samples in the deep feature space, we promote rain-streak removal and image restoration. Our method achieves a PSNR of 25.80 dB on Test100, surpassing other unsupervised methods.
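The dual contrastive constraint described above can be viewed as an InfoNCE-style objective: an anchor patch embedding is pulled toward a similar clean-image patch (positive) and pushed away from rain-layer patches (negatives). The following minimal NumPy sketch illustrates that objective only; the function names, embedding dimensions, and temperature are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dual_contrastive_loss(anchor, positive, negatives, tau=0.07):
    """InfoNCE-style loss: the anchor (a clean-layer patch embedding) is
    attracted to the positive (a similar clean patch) and repelled from
    the negatives (rain-layer patch embeddings). tau is the temperature."""
    pos = np.exp(cosine(anchor, positive) / tau)
    neg = sum(np.exp(cosine(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))
```

Minimizing this loss drives clean-patch embeddings together and rain-patch embeddings apart in the feature space, which is the mutual-exclusion constraint the abstract refers to: when the positive really is similar to the anchor, the loss is small; when a rain-layer patch is mistakenly treated as the positive, the loss grows.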


Availability of data and materials

The real-world dataset can be downloaded at https://drive.google.com/drive/folders/1sSfm-HplPO3FLKR3wAH3iWBmYsD_UAy_?usp=sharing; the Rain800 dataset at https://drive.google.com/drive/folders/0Bw2e6Q0nQQvGbi1xV1Yxd09rY2s; and the RainTrainL dataset at https://drive.google.com/file/d/1SPlNb19nmVCwLLdrzJnSrj-oJGGMMsxu/view.


Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Qing Li.

Ethics declarations

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Code availability

The custom code used to support the findings of this study is available from the corresponding author upon request.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, T., Wang, K. & Li, Q. When dual contrastive learning meets disentangled features for unpaired image deraining. Machine Vision and Applications 34, 73 (2023). https://doi.org/10.1007/s00138-023-01421-2

