
Adaptive low light visual enhancement and high-significant target detection for infrared and visible image fusion

  • Original Article
  • Published in The Visual Computer

Abstract

Infrared and visible image fusion aims to obtain a fused image with salient targets while preserving as much texture detail as possible, which can improve the reliability of target detection and tracking tasks. However, visible images captured under low-illumination conditions lose many details and therefore yield poor fusion results. To address this issue, we propose a novel fusion scheme based on adaptive visual enhancement and high-significance target detection. First, a bright-pass bilateral filter and adaptive gamma correction-based algorithm is proposed to enhance the visible image adaptively. Second, an iterative guided filtering and infrared patch-tensor-based algorithm is proposed to extract the infrared targets. Third, an efficient hybrid \(\ell_{1} - \ell_{0}\) model decomposes the infrared and visible images into base and detail layers, which are then fused with a weight-map strategy. The final fused image is obtained by merging the fused base layers, the fused detail layers, and the infrared targets. Qualitative and quantitative experimental results demonstrate that the proposed method is superior to nine state-of-the-art image fusion methods, preserving more valuable texture details and salient infrared targets. Supplemental material and code for this work are publicly available at: https://github.com/VCMHE/BI-Fusion.
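To make the first step of the pipeline more concrete, the sketch below illustrates the general idea of edge-preserving, adaptive low-light enhancement: an illumination estimate is computed with a bilateral filter, and the luminance is then corrected with a gamma value adapted to the scene brightness. This is a minimal illustration written with OpenCV and NumPy, not the authors' BI-Fusion implementation; the filter parameters and the gamma mapping are assumptions chosen for demonstration only (the official code is available at the repository linked above).

# Minimal sketch of adaptive low-light enhancement: bilateral-filtered
# illumination estimate + brightness-adaptive gamma correction.
# NOT the authors' exact algorithm; parameters are illustrative assumptions.
import cv2
import numpy as np

def enhance_low_light(visible_bgr):
    """Brighten a low-illumination visible image while preserving edges."""
    img = visible_bgr.astype(np.float32) / 255.0

    # Work on the luminance (V) channel so colors are not distorted.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    v = np.ascontiguousarray(hsv[:, :, 2])

    # Edge-preserving smoothing of the luminance as a rough illumination map
    # (a simple stand-in for the bright-pass bilateral filter in the paper).
    illum = cv2.bilateralFilter(v, d=9, sigmaColor=0.1, sigmaSpace=15)

    # Adaptive gamma: darker scenes (lower mean illumination) get a smaller
    # gamma, i.e. stronger brightening. This mapping is a common heuristic,
    # not the paper's formula.
    mean_illum = float(np.mean(illum))
    gamma = float(np.clip(mean_illum / 0.5, 0.3, 1.0))
    v_enhanced = np.power(np.clip(v, 0.0, 1.0), gamma)

    hsv[:, :, 2] = v_enhanced
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    vis = cv2.imread("visible.png")  # hypothetical input path
    if vis is not None:
        cv2.imwrite("visible_enhanced.png", enhance_low_light(vis))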

Data availability

The datasets generated and analyzed during the current study are available at: https://github.com/VCMHE/BI-Fusion.

Acknowledgements

This work was supported in part by the Provincial Major Science and Technology Special Plan Projects under Grant 202202AD080003; in part by the National Natural Science Foundation of China under Grants 62202416, 62162068, 62172354, and 61761049; in part by the Yunnan Province Ten Thousand Talents Program and Yunling Scholars Special Project under Grant YNWR-YLXZ-2018-022; in part by the Yunnan Provincial Science and Technology Department-Yunnan University "Double First Class" Construction Joint Fund Project under Grant 2019FY003012; in part by the Science Research Fund Project of Yunnan Provincial Department of Education under Grant 2021Y027; and in part by the Graduate Research and Innovation Foundation of Yunnan University under Grants 2021Y176 and 2021Y272.

Author information

Authors and Affiliations

Authors

Contributions

WY contributed to Conceptualization, Methodology, Software, and Writing – original draft. KH contributed to Supervision, Writing – review & editing, Project administration, and Funding acquisition. DX contributed to Supervision, Project administration, and Funding acquisition. YY contributed to Visualization and Formal analysis. YL contributed to Validation and Data curation.

Corresponding authors

Correspondence to Kangjian He or Dan Xu.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of the article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yin, W., He, K., Xu, D. et al. Adaptive low light visual enhancement and high-significant target detection for infrared and visible image fusion. Vis Comput 39, 6723–6742 (2023). https://doi.org/10.1007/s00371-022-02759-w
