
Infrared and visible image fusion via gradientlet filter and salience-combined map


Abstract

In this study, we propose a salience-combined map and a gradientlet filter for infrared and visible image fusion. The method enhances infrared targets while retaining more detailed textures. First, within a multi-scale decomposition framework, the gradientlet filter decomposes the source images into approximate layers and residual layers. The approximate layers preserve the smooth areas of the source images without edge blurring, while the residual layers capture the small gradients and noise. Because the texture in the residual layers is weak, we introduce a Gamma-enhanced gradient map to supplement it. Fusing the approximate and residual layers yields an initial fused image. The salience-combined map extracts salient objects directly from the infrared image via pixel threshold segmentation and extracts the background outside those objects from the visible image; it then guides the initial fused image to produce the final result. In our qualitative analysis, we compare our method against five traditional and deep learning-based methods. In the quantitative assessment, on 29 pairs of randomly selected source images, our algorithm consistently outperforms the compared methods across metrics including EN, SF, AG, and FD. These results confirm that our method generates fused images with clear targets and rich details.
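
To make the pipeline described above concrete, the following is a minimal Python sketch under stated assumptions: the Gaussian smoothing is only a stand-in for the paper's gradientlet filter, and the fusion rules, threshold tau, gamma value, and texture weight are illustrative choices, not the authors' exact formulation.

# Minimal sketch of the fusion pipeline from the abstract.
# ASSUMPTIONS: gaussian_filter stands in for the gradientlet filter;
# the fusion rules, gamma, tau, and the 0.1 texture weight are
# hypothetical illustration values, not the authors' formulation.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def decompose(img, sigma=2.0):
    """Split an image into an approximate (smooth) layer and a residual layer."""
    approx = gaussian_filter(img, sigma)  # stand-in for the gradientlet filter
    return approx, img - approx           # residual: small gradients and noise

def gamma_gradient(img, gamma=0.6):
    """Gamma-enhanced gradient map that supplements weak residual texture."""
    mag = np.hypot(sobel(img, axis=1), sobel(img, axis=0))
    mag /= mag.max() + 1e-12              # normalize to [0, 1]
    return mag ** gamma                   # gamma < 1 amplifies faint texture

def salience_combined_map(ir, tau=0.7):
    """Binary map: pixels above a threshold of the IR intensity range are salient."""
    t = ir.min() + tau * (ir.max() - ir.min())
    return (ir > t).astype(np.float64)

def fuse(ir, vis, sigma=2.0, gamma=0.6, tau=0.7):
    """Fuse two aligned grayscale float images in [0, 1]."""
    a_ir, r_ir = decompose(ir, sigma)
    a_vis, r_vis = decompose(vis, sigma)
    approx = 0.5 * (a_ir + a_vis)                                # mean rule
    resid = np.where(np.abs(r_ir) > np.abs(r_vis), r_ir, r_vis)  # max-abs rule
    init = approx + resid + 0.1 * gamma_gradient(vis, gamma)     # texture supplement
    s = salience_combined_map(ir, tau)
    # Salient IR targets are kept directly; visible background fills the rest.
    return s * ir + (1.0 - s) * init

# Usage: fused = fuse(ir_img, vis_img) with two aligned float arrays of equal shape.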


Data Availability

All public images used in this paper are sourced from the following datasets:

1. TNO dataset: https://doi.org/10.6084/m9.figshare.1008029.v1

2. RoadScene dataset: https://github.com/frostcza/RoadScene


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 62073304 and 62373338).

Author information

Corresponding author

Correspondence to Chen Jun.

Ethics declarations

Consent for Publication

The work described here has not been published before, and its publication has been approved by the responsible authorities at the institution where the work was carried out.

Competing interests

The authors declare that they have no competing interests regarding the publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jun, C., Lei, C., Wei, L. et al. Infrared and visible image fusion via gradientlet filter and salience-combined map. Multimed Tools Appl (2023). https://doi.org/10.1007/s11042-023-17778-5
