Abstract
Low-light image enhancement aims to optimize images captured under low-light conditions, which suffer from low brightness and contrast, and render them as natural-looking images better aligned with the human visual system. However, existing methods cannot simultaneously address color distortion, noise amplification, and loss of detail during enhancement. To this end, we propose a novel low-light image enhancement network, referred to as the U-shape transformer with color fusion (CF-UFormer), which employs the transformer block as its fundamental element and comprises three modules: a feature extraction module (FEM), a U-Former structure, and a refinement module. First, the FEM leverages three color spaces with different color gamuts to extract shallow features, thereby retaining rich color and detail information in the enhanced image. In addition, we introduce a channel attention mechanism into the U-Former structure to compensate for the lack of spatial-dimension information interaction; by adaptively learning the weight parameters between channels, it suppresses the noise amplification caused by successive downsampling. Finally, to address the limited expressive ability of the \({L_{1}}\) loss function used by most existing methods, CF-UFormer combines four loss functions for training on the LOL dataset, achieving excellent qualitative and quantitative results on various benchmark datasets. The code and models are available at https://github.com/sunyinbang/CF-UFormer.
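The abstract does not specify the exact form of the channel attention used in the U-Former structure; a minimal squeeze-and-excitation-style sketch in NumPy (all shapes, weights, and the reduction ratio are hypothetical, not taken from the paper) illustrates the general idea of adaptively reweighting channels:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention: global average pooling over the spatial
    dimensions, a two-layer bottleneck, and a sigmoid gate that rescales
    each channel of the feature map.
    feat: (C, H, W) feature map; w1: (C//r, C); w2: (C, C//r)."""
    squeeze = feat.mean(axis=(1, 2))              # (C,) per-channel descriptor
    hidden = np.maximum(0.0, w1 @ squeeze)        # ReLU bottleneck (reduction r)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid channel weights in (0, 1)
    return feat * gate[:, None, None]             # reweight each channel

# Toy usage with random weights (in a real network, w1/w2 are learned)
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))             # 8 channels, 4x4 spatial
w1 = rng.standard_normal((2, 8))                  # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the gate lies in (0, 1), each output channel is a damped copy of its input; during training the network can learn to keep informative channels near 1 and push noise-dominated channels toward 0, which is the mechanism the abstract credits with suppressing downsampling-induced noise.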
Availability of data and materials
The code and datasets used in the current study are available at the project website: https://github.com/sunyinbang/CF-UFormer.
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grants 62172137, 61976042 and 61972068, by the Liaoning Revitalization Talents Program under Grant XLYC2007023, and by the Dalian Youth Science and Technology Star Program under Grant 2022RQ086.
Author information
Contributions
FS and JS conceived this study. YS and FW conducted the experiment and wrote the initial manuscript. HL and JS reviewed and edited it.
Ethics declarations
Conflict of interest
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
Ethics approval
No ethical approval was required for the present study.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Sun, Y., Sun, J., Sun, F. et al. Low-light image enhancement using transformer with color fusion and channel attention. J Supercomput (2024). https://doi.org/10.1007/s11227-024-06177-8