
ZRDNet: zero-reference image defogging by physics-based decomposition–reconstruction mechanism and perception fusion

Original article · Published in The Visual Computer (2023)

Abstract

This paper investigates the challenging fully unsupervised defogging problem: how to remove fog by feeding only foggy images into deep neural networks, rather than paired or unpaired synthetic images, and how to overcome the insufficient structure and detail recovery of existing unsupervised defogging methods. To this end, a zero-reference image defogging method (ZRDNet) is proposed. Specifically, we develop an unsupervised defogging network consisting of a layer decomposition network and a perceptual fusion network, which are optimized separately by a joint multi-loss strategy based on stage-wise learning. The decomposition network guides the image decomposition–reconstruction process through carefully constructed loss functions. The fusion network further enhances the details and contrast of the defogged images by fusing the decomposition–reconstruction results. The joint multi-loss optimization strategy based on stage-wise learning guides the decomposition and fusion tasks, which are completed stage by stage. Additionally, a non-reference loss is constructed to prevent the artifacts and distortion induced by deviations in the transmission values. Our method is completely unsupervised: training relies only on foggy images and information derived from those images themselves. Experiments demonstrate that ZRDNet achieves favorable performance while overcoming both the insufficient structure and detail recovery of prior unsupervised methods and the domain shift induced by training on synthetic images.
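For context, the "physics-based decomposition–reconstruction mechanism" in the title presumably builds on the standard atmospheric scattering model used throughout the single-image defogging literature, in which a foggy image is decomposed into a fog-free radiance layer, a transmission map, and a global atmospheric light. This is a hedged reading; the paper's exact formulation may differ:

    % Atmospheric scattering model (the usual basis for physics-based defogging)
    %   I(x)  observed foggy image at pixel x
    %   J(x)  fog-free scene radiance (the desired defogged output)
    %   t(x)  medium transmission, decaying with scene depth d(x)
    %   A     global atmospheric light
    \[
      I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr),
      \qquad t(x) = e^{-\beta d(x)}
    \]

To make the abstract's two-stage scheme concrete, the following is a minimal, self-contained PyTorch sketch of zero-reference, stage-wise training: a decomposition network is first optimized with a physics-based reconstruction loss plus a placeholder non-reference prior on the transmission map; it is then frozen, and a fusion network is trained to enhance detail and contrast. All module definitions, loss terms, weights, and hyperparameters here are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DecompositionNet(nn.Module):
        """Toy stand-in: predicts radiance J and transmission t from a foggy image."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 4, 3, padding=1), nn.Sigmoid())

        def forward(self, x):
            out = self.body(x)
            return out[:, :3], out[:, 3:4]          # J in [0,1], t in (0,1)

    class FusionNet(nn.Module):
        """Toy stand-in: fuses the foggy input with the defogged estimate."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, padding=1))

        def forward(self, foggy, J):
            return self.body(torch.cat([foggy, J], dim=1))

    def tv(t):
        """Total-variation smoothness prior on the transmission map
        (a crude stand-in for the paper's non-reference loss on t)."""
        return ((t[..., 1:, :] - t[..., :-1, :]).abs().mean()
                + (t[..., :, 1:] - t[..., :, :-1]).abs().mean())

    decomp, fusion = DecompositionNet(), FusionNet()
    A = 0.9                                          # assumed atmospheric light
    foggy = torch.rand(1, 3, 64, 64)                 # stand-in for a real foggy image

    # Stage 1: zero-reference training of the decomposition network; the only
    # supervision is reconstructing the foggy input through the physics model.
    # (A real system needs further zero-reference priors to rule out the
    # degenerate solution t = 1, J = I; they are omitted from this sketch.)
    opt_d = torch.optim.Adam(decomp.parameters(), lr=1e-4)
    for _ in range(100):
        J, t = decomp(foggy)
        recon = J * t + A * (1 - t)                  # I = J*t + A*(1-t)
        loss = F.l1_loss(recon, foggy) + 0.1 * tv(t)
        opt_d.zero_grad(); loss.backward(); opt_d.step()

    # Stage 2: freeze the decomposition network; train the fusion network to
    # sharpen detail and contrast of the defogged estimate.
    for p in decomp.parameters():
        p.requires_grad_(False)
    opt_f = torch.optim.Adam(fusion.parameters(), lr=1e-4)
    for _ in range(100):
        with torch.no_grad():
            J, _ = decomp(foggy)
        out = fusion(foggy, J)
        loss = F.l1_loss(out, J) - 0.01 * out.std()  # fidelity + crude contrast bonus
        opt_f.zero_grad(); loss.backward(); opt_f.step()

The point of the stage-wise design, as the abstract describes it, is that each sub-task is optimized against a stable target: the fusion network is never asked to compensate for a decomposition that is still changing.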


Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 52371372, 61833011); and the Project of Science and Technology Commission of Shanghai Municipality, China (Grant Nos. 20ZR1420200, 21SQBS01600, 22JC1401400, 19510750300, 21190780300).

Author information


Corresponding author

Correspondence to Yu-Long Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, ZX., Wang, YL., Han, QL. et al. ZRDNet: zero-reference image defogging by physics-based decomposition–reconstruction mechanism and perception fusion. Vis Comput (2023). https://doi.org/10.1007/s00371-023-03109-0

