
CRetinex: A Progressive Color-Shift Aware Retinex Model for Low-Light Image Enhancement

  • Published in: International Journal of Computer Vision

Abstract

Low-light environments introduce various complex degradations into captured images. Retinex-based methods have demonstrated effective enhancement performance by decomposing an image into illumination and reflectance, allowing degradations to be adjusted and removed selectively. However, the different types of pollution in the reflectance are often treated together; without an explicit distinction and definition of the individual pollution types, residual pollution remains in the results. In particular, color shift, which is generally spatially invariant, differs from other spatially variant pollution and is difficult to eliminate with denoising methods. The remaining color shift compromises color constancy both in theory and in practice. In this paper, we consider the different manifestations of degradations and decompose them further. We propose a color-shift aware Retinex model, termed CRetinex, which decomposes an image into reflectance, color shift, and illumination. Dedicated networks are designed to remove spatially variant pollution, correct the color shift, and adjust the illumination separately. Comparative experiments with the state of the art demonstrate the qualitative and quantitative superiority of our approach. Furthermore, extensive experiments on multiple datasets, including real and synthetic images, along with extended validation, confirm the effectiveness of color-shift aware decomposition and the generalization of CRetinex over a wide range of low-light levels.
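The decomposition described in the abstract can be illustrated with a toy NumPy sketch. This models an observed image as I = R ∘ S ∘ L, where R is the reflectance, S a spatially invariant per-channel color shift, and L a single-channel illumination map. It is a hedged illustration of the decomposition model only, not the paper's learned networks; the perfect estimate of S and the target illumination level are assumptions made for the example.

```python
import numpy as np

# Toy illustration of a CRetinex-style decomposition I = R * S * L:
#   R : reflectance (scene colors), spatially variant
#   S : color shift, one scalar per channel (spatially invariant)
#   L : illumination map, single channel (spatially variant)
rng = np.random.default_rng(0)
H, W = 8, 8

R = rng.uniform(0.2, 1.0, (H, W, 3))   # reflectance
S = np.array([0.9, 0.7, 1.1])          # global per-channel color shift
L = rng.uniform(0.05, 0.3, (H, W, 1))  # low illumination

I_low = R * S * L                      # observed low-light image

# Enhancement under this model: invert the color shift, then relight.
S_est = S          # assume a perfect color-shift estimate (for the sketch)
L_target = 1.0     # desired illumination level
I_enh = (I_low / S_est) / L * L_target

# With exact estimates, the enhanced image recovers the reflectance.
assert np.allclose(I_enh, R)
```

In the actual method, S and L are predicted by dedicated networks and the spatially variant pollution in R is removed by a separate restoration network; the sketch only shows why a spatially invariant color shift cannot be handled by per-pixel denoising and instead needs its own global correction term.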



Data Availability

The data and code generated during the current study are available from the corresponding author upon reasonable request.

Notes

  1. https://daooshee.github.io/BMVC2018website/.

  2. https://phi-ai.buaa.edu.cn/project/AgLLNet/index.htm.

  3. https://github.com/cchen156/Learning-to-See-in-the-Dark.

  4. https://drive.google.com/file/d/0BwVzAzXoqrSXb3prWUV1YzBjZzg/view?pli=1&resourcekey=0-VZXvwdwr7QbH3FoX10yPXg.

  5. https://www.flickr.com/photos/73847677@N02/sets/72157718844828948/.

  6. https://bupt-ai-cz.github.io/LLVIP/.


Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 62276192.

Author information


Corresponding author

Correspondence to Jiayi Ma.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by Boxin Shi.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Xu, H., Zhang, H., Yi, X. et al. CRetinex: A Progressive Color-Shift Aware Retinex Model for Low-Light Image Enhancement. Int J Comput Vis (2024). https://doi.org/10.1007/s11263-024-02065-z

