
Nighttime Defogging Using High-Low Frequency Decomposition and Grayscale-Color Networks

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12357)

Abstract

We address the problem of nighttime defogging from a single image by introducing a framework consisting of two modules: a grayscale module and a color module. Given an RGB foggy nighttime image, our grayscale module takes the grayscale version of the image as input and decomposes it into high and low frequency layers. The high frequency layers contain the scene texture information, which is less affected by fog, while the low frequency layers contain the scene layout/structure information, including fog and glow. Our grayscale module then enhances the visibility of the textures in the high frequency layers and removes the presence of glow and fog in the low frequency layers. Having processed the high/low frequency information, it fuses the two layers to obtain a grayscale defogged image. Our second module, the color module, takes the original RGB image and processes it similarly to the grayscale module. However, to obtain fog-free high and low frequency information, the module is guided by the grayscale module. The reason for this is that grayscale images are less affected by the multiple colors of atmospheric light commonly present in nighttime scenes. Moreover, having the grayscale module allows us to impose consistency losses between the outputs of the two modules, which is critical to our framework, since we do not have paired ground truths for our real data. Our extensive experiments on real foggy nighttime images show the effectiveness of our method.
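To make the two ideas in the abstract concrete, the following is a minimal sketch, not the authors' implementation: the high/low frequency split is approximated here with a Gaussian low-pass filter, and the grayscale-color consistency term is shown as a simple L1 loss between the color module's output (converted to grayscale) and the grayscale module's output. The kernel size, sigma, and luma weights are illustrative assumptions.

```python
# Minimal sketch of (1) high/low frequency decomposition and
# (2) a grayscale-color consistency loss, under assumed hyperparameters.
import torch
import torch.nn.functional as F


def gaussian_kernel(size: int = 21, sigma: float = 5.0) -> torch.Tensor:
    """Build a 2D Gaussian kernel of shape (1, 1, size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel = torch.outer(g, g)
    return (kernel / kernel.sum()).view(1, 1, size, size)


def decompose(img: torch.Tensor, size: int = 21, sigma: float = 5.0):
    """Split an image (B, C, H, W) into low and high frequency layers.

    Low frequency: Gaussian-blurred image (layout/structure, fog, glow).
    High frequency: residual img - low (scene textures, less affected by fog).
    """
    channels = img.shape[1]
    kernel = gaussian_kernel(size, sigma).to(img.device).repeat(channels, 1, 1, 1)
    low = F.conv2d(img, kernel, padding=size // 2, groups=channels)
    high = img - low
    return low, high


def grayscale_consistency_loss(rgb_out: torch.Tensor, gray_out: torch.Tensor) -> torch.Tensor:
    """L1 consistency between the color module's output (converted to
    grayscale) and the grayscale module's output; usable without paired
    ground truth, as the abstract motivates."""
    # ITU-R BT.601 luma weights for the RGB -> grayscale conversion (an assumption).
    weights = torch.tensor([0.299, 0.587, 0.114], device=rgb_out.device).view(1, 3, 1, 1)
    rgb_as_gray = (rgb_out * weights).sum(dim=1, keepdim=True)
    return F.l1_loss(rgb_as_gray, gray_out)
```

In this sketch, each module would process `high` and `low` separately (texture enhancement on `high`, fog/glow removal on `low`) and fuse the results, with the consistency loss coupling the grayscale and color branches during training.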

Notes

Acknowledgment

This work is supported by MOE2019-T2-1-130.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. National University of Singapore, Singapore, Singapore
  2. Yale-NUS College, Singapore, Singapore
  3. ETH Zurich, Zürich, Switzerland
