
Low-light image enhancement using transformer with color fusion and channel attention

The Journal of Supercomputing

Abstract

Low-light image enhancement aims to improve images captured under low-light conditions, which suffer from low brightness and contrast, rendering them natural-looking and better aligned with the human visual system. However, existing methods cannot simultaneously address color distortion, noise amplification, and loss of detail during enhancement. To this end, we propose a novel low-light image enhancement network, a U-shaped transformer with color fusion (CF-UFormer), which employs the transformer block as its fundamental element and comprises three modules: a feature extraction module (FEM), a U-Former structure, and a refinement module. First, the FEM leverages three color spaces with different color gamuts to extract shallow features, retaining rich color and detail information in the enhanced image. In addition, we introduce a channel attention mechanism into the U-Former structure to compensate for the lack of spatial-dimension information interaction; by adaptively learning weights between channels, it suppresses the noise amplification caused by repeated downsampling. Finally, to address the limited expressive ability of the \({L_{1}}\) loss function used in most existing methods, CF-UFormer combines four loss functions for training on the LOL dataset, achieving excellent qualitative and quantitative results on various benchmark datasets. The codes and models are available at https://github.com/sunyinbang/CF-UFormer.
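The channel attention described above can be sketched as a squeeze-and-excitation-style block: each channel is pooled to a scalar, a small two-layer network maps those scalars to per-channel gates, and the feature map is rescaled by the gates. This is a minimal NumPy sketch for illustration only; the paper's exact layer sizes, placement, and learned weights are not given in the abstract, so the weights below are random placeholders.

```python
import numpy as np

def channel_attention(x, reduction=2):
    """Rescale each channel of x (shape (C, H, W)) by a learned gate in (0, 1)."""
    C = x.shape[0]
    # Squeeze: global average pooling per channel
    z = x.mean(axis=(1, 2))                      # (C,)
    # Excitation: two small linear layers (random placeholder weights)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((C // reduction, C)) * 0.1
    w2 = rng.standard_normal((C, C // reduction)) * 0.1
    h = np.maximum(w1 @ z, 0.0)                  # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))          # sigmoid gates in (0, 1)
    # Rescale: weight each channel by its gate
    return x * s[:, None, None]

y = channel_attention(np.ones((8, 4, 4)))
print(y.shape)  # (8, 4, 4)
```

Because the gates lie strictly in (0, 1), channels the network deems noisy can be attenuated while informative channels pass through nearly unchanged, which is how adaptive channel weighting can counteract noise amplified by downsampling.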
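The FEM's idea of extracting shallow features from several color spaces can be pictured as converting the input into each space and concatenating the results channel-wise. The abstract does not name the three color spaces, so the sketch below pairs RGB with HSV purely as an illustrative second space, using the standard-library `colorsys` conversion.

```python
import colorsys
import numpy as np

def multi_space_features(rgb):
    """Stack per-pixel representations from two color spaces.

    rgb: (H, W, 3) floats in [0, 1]. Returns (H, W, 6): RGB channels
    followed by HSV channels, a stand-in for the multi-gamut shallow
    features a feature extraction module might consume.
    """
    H, W, _ = rgb.shape
    hsv = np.empty_like(rgb)
    for i in range(H):
        for j in range(W):
            hsv[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])
    # Channel-wise concatenation: later layers see both representations
    return np.concatenate([rgb, hsv], axis=-1)

feats = multi_space_features(np.full((2, 2, 3), 0.5))
print(feats.shape)  # (2, 2, 6)
```

Different color spaces separate luminance from chroma differently, so concatenating them gives subsequent layers complementary views of the same pixel, which is the intuition behind retaining richer color and detail information.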
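Combining several loss functions, as CF-UFormer does, typically means a weighted sum of complementary terms. The abstract does not name the four losses or their weights, so the sketch below shows only two common illustrative terms, an L1 reconstruction loss and a total-variation smoothness loss, with placeholder weights.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error: penalizes per-pixel reconstruction error."""
    return np.abs(pred - target).mean()

def tv_loss(pred):
    """Total variation: penalizes abrupt neighbor differences, discouraging noise."""
    dh = np.abs(np.diff(pred, axis=0)).mean()
    dw = np.abs(np.diff(pred, axis=1)).mean()
    return dh + dw

def combined_loss(pred, target, weights=(1.0, 0.1)):
    # Illustrative weighted sum; the paper's four terms and weights are
    # not specified in the abstract
    return weights[0] * l1_loss(pred, target) + weights[1] * tv_loss(pred)
```

Each term pulls the network toward a different property (fidelity, smoothness, perceptual quality, and so on), which is why a multi-term objective can express constraints a lone L1 loss cannot.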




Availability of data and materials

The code and datasets used in the current study are available on the project website: https://github.com/sunyinbang/CF-UFormer.

Notes

  1. https://daooshee.github.io/BMVC2018website/.

  2. https://github.com/abcdef2000/R2RNet.


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 62172137, 61976042 and 61972068, by the Liaoning Revitalization Talents Program under Grant XLYC2007023, and by the Dalian Youth Science and Technology Star Program under Grant 2022RQ086.

Author information


Contributions

FS and JS conceived this study. YS and FW conducted the experiment and wrote the initial manuscript. HL and JS reviewed and edited it.

Corresponding author

Correspondence to Jing Sun.

Ethics declarations

Conflict of interest

All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.

Ethics approval

No ethical approval was needed for the present study.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sun, Y., Sun, J., Sun, F. et al. Low-light image enhancement using transformer with color fusion and channel attention. J Supercomput (2024). https://doi.org/10.1007/s11227-024-06177-8

