
Robust Unpaired Image Dehazing via Density and Depth Decomposition

Published in: International Journal of Computer Vision

Abstract

To overcome the overfitting issue of dehazing models trained on synthetic hazy-clean image pairs, recent methods attempt to boost generalization by training on unpaired data. However, most existing approaches simply formulate dehazing–rehazing cycles with generative adversarial networks, while ignoring a key physical property of real-world hazy environments: the haze effect varies with both haze density and scene depth. This paper proposes a robust self-augmented image dehazing framework for haze generation and removal. Instead of merely estimating transmission maps or clean content, the proposed scheme estimates the scattering coefficient and the depth of hazy and clean images. With the scene depth estimated, our method can re-render hazy images of different thicknesses, which benefits the training of the dehazing network. In addition, a dual contrastive perceptual loss is introduced to further improve the quality of both dehazed and rehazed images. Comprehensive experiments demonstrate the advantages of our method over state-of-the-art unpaired dehazing methods in terms of visual quality, model size, and computational cost. Moreover, our model can be robustly trained not only on synthetic indoor datasets but also on real outdoor scenes, yielding remarkable improvement in real-world image dehazing. Our code and training data are available at: https://github.com/YaN9-Y/D4_plus.
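For readers unfamiliar with the physical model the abstract refers to, the sketch below illustrates the standard atmospheric scattering (Koschmieder) formulation that couples haze density and scene depth, and how varying the scattering coefficient re-renders the same scene with different haze thicknesses. It is a minimal illustration under common assumptions (a single scattering coefficient and a global atmospheric light), not the authors' released implementation; the function name and default airlight value are hypothetical.

```python
import numpy as np

def rehaze(clean_rgb, depth, beta, airlight=np.array([0.9, 0.9, 0.9])):
    """Render a hazy image from a clean image and its depth map using the
    standard atmospheric scattering model I = J*t + A*(1 - t), t = exp(-beta*d).

    clean_rgb : HxWx3 float array in [0, 1], clean scene radiance J
    depth     : HxW float array, (relative) scene depth d
    beta      : scalar scattering coefficient controlling haze density
    airlight  : global atmospheric light A (illustrative default)
    """
    t = np.exp(-beta * depth)[..., None]          # transmission map, HxWx1
    return clean_rgb * t + airlight * (1.0 - t)   # composite hazy image I

# Re-rendering the same scene with several densities, e.g. beta in
# [0.5, 1.0, 2.0], produces progressively thicker haze and can serve as
# self-augmented training data for a dehazing network.
```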



Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 62072327.

Author information


Corresponding author

Correspondence to Xiaojie Guo.

Additional information

Communicated by Jian Sun.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yang, Y., Wang, C., Guo, X. et al. Robust Unpaired Image Dehazing via Density and Depth Decomposition. Int J Comput Vis 132, 1557–1577 (2024). https://doi.org/10.1007/s11263-023-01940-5
