
Efficient low-light image enhancement with model parameters scaled down to 0.02M

Original Article
International Journal of Machine Learning and Cybernetics

Abstract

In the field of low-light image enhancement, existing deep learning methods face three significant challenges: inaccurate estimation of the reflection component, limited enhancement capability, and high computational cost. This study introduces an efficient solution to these problems: an Ultra-Lightweight Enhancement Network (ULENet). Our primary contributions are twofold. First, we propose combining channel-wise context mining with spatial-wise reinforcement for improved low-light image enhancement. Second, we introduce ULENet, a novel lightweight neural architecture designed specifically for this purpose. ULENet comprises two subnetworks: a channel-wise context mining subnetwork that extracts rich context from low-light images, and a spatial-wise reinforcement subnetwork for extensive spatial feature extraction and detail reconstruction. The model is trained and evaluated with the PyTorch deep learning framework. Extensive experiments demonstrate that ULENet significantly outperforms nine state-of-the-art low-light enhancement methods in speed, accuracy, and adaptability to complex low-light scenarios. These results validate our initial hypothesis and underscore the effectiveness of the proposed approach.
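The abstract describes a two-branch design, a channel-wise context mining subnetwork plus a spatial-wise reinforcement subnetwork, implemented in PyTorch within a roughly 0.02M-parameter budget, but this page does not reproduce the layer configuration. The following is therefore only a minimal PyTorch sketch of how such a two-branch, ultra-lightweight enhancer could be wired up; every module name, channel width, and kernel choice (ChannelContextMining, SpatialReinforcement, ULENetSketch, the use of depthwise separable convolutions) is an illustrative assumption, not the authors' published architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the two-branch idea from the abstract.
# Names, widths, and kernel sizes are assumptions for illustration only.

class ChannelContextMining(nn.Module):
    """Channel-wise context: pool spatial dims, then re-weight channels."""
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                  # global spatial context
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.GELU(),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))                     # channel re-weighting

class SpatialReinforcement(nn.Module):
    """Spatial-wise refinement with a depthwise-separable convolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pw = nn.Conv2d(channels, channels, 1)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.act(self.pw(self.dw(x)))             # residual spatial refinement

class ULENetSketch(nn.Module):
    """Tiny enhancer: channel branch followed by spatial branch, residual output."""
    def __init__(self, width: int = 8):
        super().__init__()
        self.head = nn.Conv2d(3, width, 3, padding=1)
        self.channel_branch = ChannelContextMining(width)
        self.spatial_branch = SpatialReinforcement(width)
        self.tail = nn.Conv2d(width, 3, 3, padding=1)

    def forward(self, x):
        f = self.head(x)
        f = self.spatial_branch(self.channel_branch(f))
        return torch.clamp(self.tail(f) + x, 0.0, 1.0)       # enhance as a residual over the input

if __name__ == "__main__":
    model = ULENetSketch()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params}")                         # well under the 0.02M budget at these widths
    out = model(torch.rand(1, 3, 256, 256))
    print(out.shape)                                         # torch.Size([1, 3, 256, 256])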


Data availability

The code of this study is available from the corresponding author on request.


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 62066047, 61966037, and Yunnan Province University Key Laboratory Construction Plan Funding, China.

Author information

Corresponding author

Correspondence to Dongming Zhou.

Ethics declarations

Conflicts of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Yang, S., Zhou, D. Efficient low-light image enhancement with model parameters scaled down to 0.02M. Int. J. Mach. Learn. & Cyber. 15, 1575–1589 (2024). https://doi.org/10.1007/s13042-023-01983-7
