Low Illumination Image Enhancement Based on U-Net

  • Mengxing Li
  • Suyu Wang
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 551)


Low-illumination images are characterized by low overall brightness, low contrast, and a low signal-to-noise ratio. Classical image enhancement algorithms have limited effect on such images and require manual parameter tuning. In this paper, a deep fully convolutional encoder-decoder based on the U-Net architecture is proposed to address low-illumination image degradation. Experimental results show that, compared with existing mainstream image enhancement algorithms, the proposed algorithm improves brightness and contrast adaptively, avoids artifacts at image edges, and achieves better scores in both objective evaluation metrics and subjective evaluation.


Keywords: Image enhancement · Convolutional neural network · Instance normalization
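The paper's network itself is not reproduced on this page. Purely as an illustrative sketch, the toy NumPy code below shows the two ingredients named in the abstract and keywords: a single-level encoder-decoder pass with a U-Net-style skip connection, and instance normalization (per-image, per-channel statistics, in contrast to batch normalization). All function names and the pooling/upsampling choices are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization: normalize each channel of each image
    independently, using only that image's own spatial statistics."""
    # x has shape (N, C, H, W); statistics are taken over H and W only.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def downsample(x):
    """Encoder step: 2x2 average pooling halves the spatial resolution."""
    n, c, h, w = x.shape
    return x.reshape(n, c, h // 2, 2, w // 2, 2).mean(axis=(3, 5))

def upsample(x):
    """Decoder step: nearest-neighbour upsampling doubles the resolution."""
    return x.repeat(2, axis=2).repeat(2, axis=3)

def unet_like_pass(x):
    """One U-shaped level: encode, decode, then concatenate the encoder
    feature map with the decoder output (the skip connection)."""
    skip = instance_norm(x)      # feature map preserved for the skip path
    encoded = downsample(skip)   # bottleneck at half resolution
    decoded = upsample(encoded)  # back to the input resolution
    return np.concatenate([skip, decoded], axis=1)  # channel-wise merge
```

In a real U-Net the pooling and upsampling steps are interleaved with learned convolutions at several resolution levels; the skip connections are what let the decoder recover the edge detail that the abstract credits with avoiding edge artifacts.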



This work is supported by the National Natural Science Foundation of China (No. 61201361), the Science Foundation of the Beijing Education Commission (No. KM201710005011), and the Training Program Foundation for the Talents in Beijing City (No. 2013D005015000008).



Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Beijing Engineering Research Center for IoT Software and Systems, Beijing, China
  2. Faculty of Information Technology, Beijing University of Technology, Beijing, China
