
Low-Light Image Enhancement Under Non-uniform Dark

  • Conference paper

MultiMedia Modeling (MMM 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13834)


Abstract

The low visibility of low-light images caused by insufficient exposure poses a significant challenge for vision tasks such as image fusion, detection, and segmentation in low-light conditions. Real-world situations such as backlighting and shadow occlusion mostly involve non-uniform low light, yet existing enhancement methods tend to brighten low-light and normal-light regions alike, whereas we would prefer to enhance dark regions while suppressing overexposed ones. To address this problem, we propose a new non-uniform dark visual network (NDVN) that uses an attention mechanism to enhance regions with different levels of illumination separately. Because deep learning is strongly data-driven, we carefully construct a non-uniform dark synthetic dataset (UDL) that is larger and more diverse than existing datasets and, more importantly, contains more non-uniform lighting states. We use the manually annotated luminance domain mask (LD-mask) in the dataset to drive the network to distinguish between low-light and extremely dark regions in the image. Guided by the LD-mask and the attention mechanism, the NDVN adaptively illuminates different light regions while enhancing the color and contrast of the image. Furthermore, we introduce a new region loss function to constrain the network, yielding higher-quality enhancement results. Extensive experiments show that our proposed network outperforms other state-of-the-art methods both qualitatively and quantitatively.
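The abstract does not spell out the region loss, but a minimal sketch can illustrate how a binary LD-mask could drive region-specific supervision. The PyTorch snippet below is an assumption-laden sketch only: the function name region_loss, the weights w_dark and w_low, and the L1 error term are hypothetical choices, not the paper's actual formulation.

```python
# Hypothetical sketch of a region-weighted loss driven by a binary LD-mask.
# The paper's actual loss is not given in this abstract; the weights and the
# L1 error term below are illustrative assumptions only.
import torch


def region_loss(pred, target, ld_mask, w_dark=2.0, w_low=1.0):
    """Weight the reconstruction error separately per illumination region.

    pred, target : (B, 3, H, W) enhanced and reference images in [0, 1]
    ld_mask      : (B, 1, H, W) binary mask, 1 = low-light, 0 = extremely dark
    w_dark, w_low: assumed region weights (not from the paper)
    """
    err = torch.abs(pred - target)   # per-pixel L1 error, (B, 3, H, W)
    dark = 1.0 - ld_mask             # extremely dark pixels
    low = ld_mask                    # ordinary low-light pixels
    eps = 1e-6                       # avoid division by zero for empty regions
    loss_dark = (err * dark).sum() / (dark.sum() * err.size(1) + eps)
    loss_low = (err * low).sum() / (low.sum() * err.size(1) + eps)
    return w_dark * loss_dark + w_low * loss_low
```

Under these assumptions, weighting the extremely dark region more heavily pushes the network to brighten it aggressively while keeping already-lit regions closer to the reference, which mirrors the stated goal of enhancing dark regions without over-brightening the rest.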



Author information


Correspondence to Yuhang Li or Youdong Ding.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Li, Y., Cai, F., Tu, Y., Ding, Y. (2023). Low-Light Image Enhancement Under Non-uniform Dark. In: Dang-Nguyen, DT., et al. MultiMedia Modeling. MMM 2023. Lecture Notes in Computer Science, vol 13834. Springer, Cham. https://doi.org/10.1007/978-3-031-27818-1_16

  • DOI: https://doi.org/10.1007/978-3-031-27818-1_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-27817-4

  • Online ISBN: 978-3-031-27818-1

  • eBook Packages: Computer Science, Computer Science (R0)
