
Multimedia Tools and Applications, Volume 78, Issue 3, pp 3817–3830

Lightness-aware contrast enhancement for images with different illumination conditions

  • Shijie Hao
  • Yanrong Guo
  • Zhongliang Wei
Article

Abstract

Taking photographs has become part of everyday life. Without sufficient skill, however, we often produce poor photographs with low contrast and unclear details under various imperfect illumination conditions. Although many image enhancement models have been developed, most of them apply a uniform enhancement strength across the whole image and therefore tend to over-enhance regions whose illumination is already satisfactory. To address this issue, we propose a novel contrast enhancement model, a simple linear fusion of an original image and its initial enhancement. As the key of our model, we construct a lightness map that estimates the scene lightness while remaining aware of the image structure at the pixel level. In the fusion process, this map dynamically weighs the initially enhanced image against the original image, ensuring a seamless fusion result. In our experiments, we validate our model on images with various illumination conditions, such as strong back light, imbalanced light, and low light. The results empirically show that our model improves image contrast while preserving naturalness.
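The abstract describes the core idea: an initial enhancement and the original image are linearly fused, with a structure-aware lightness map acting as the per-pixel weight, so well-lit regions stay close to the original while dark regions receive the enhancement. The sketch below is a minimal illustration of that fusion idea only; the function names, the gamma-based initial enhancement, and the box-filter smoothing are illustrative assumptions, not the authors' formulation, which relies on a simplified Retinex model and a guided image filter.

```python
import numpy as np

def initial_enhancement(img, gamma=0.5):
    # Global gamma brightening as a stand-in for the paper's initial
    # enhancement step (the paper builds on a simplified Retinex model).
    return np.clip(img ** gamma, 0.0, 1.0)

def lightness_map(img, radius=7):
    # Per-pixel lightness estimated from the max RGB channel, smoothed with
    # a brute-force box filter; the paper instead uses a guided image filter
    # so that the map stays aligned with image structure.
    light = img.max(axis=2)
    pad = np.pad(light, radius, mode="edge")
    smooth = np.zeros_like(light)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            smooth += pad[radius + dy:radius + dy + light.shape[0],
                          radius + dx:radius + dx + light.shape[1]]
    return smooth / (2 * radius + 1) ** 2

def lightness_aware_fuse(img):
    # Linear fusion: pixels with weight near 1 (already well lit) keep the
    # original values, pixels with weight near 0 (dark) receive the
    # initial enhancement.
    enhanced = initial_enhancement(img)
    w = lightness_map(img)[..., None]
    return w * img + (1.0 - w) * enhanced

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.random((64, 64, 3))  # stand-in for an RGB photo scaled to [0, 1]
    out = lightness_aware_fuse(demo)
    print(out.shape, float(out.min()), float(out.max()))
```

Because the weight varies smoothly with the estimated lightness, the blend avoids the hard transitions that a single global enhancement strength would produce.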

Keywords

Image enhancement · Lightness map · Guided image filter · Simplified Retinex model

Notes

Acknowledgements

The authors sincerely appreciate the efforts of the anonymous reviewers and their useful comments during the reviewing process. The research was supported by the National Natural Science Foundation of China under grant numbers 61772171, 61702156, and 61632007.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. School of Computer and Information, Hefei University of Technology, Hefei, China
  2. School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China
