
Fully convolutional network-based infrared and visible image fusion

Multimedia Tools and Applications

Abstract

This study proposes a novel fusion framework for infrared and visible images based on a fully convolutional network (FCN) in the local non-subsampled shearlet transform (LNSST) domain. First, the LNSST is used as a multi-scale analysis tool to decompose the source images into low-frequency and high-frequency sub-images. Second, the high-frequency coefficients are fed into the FCN to obtain a weight map, and the average gradient (AVG) then serves as the fusion rule for the high-frequency sub-images, while the low-frequency coefficients are fused by a local energy strategy. Finally, the inverse LNSST is applied to obtain the fused image. Experimental results show that the proposed framework outperforms other typical fusion methods in both visual quality and objective assessment.
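To make the pipeline concrete, the following is a minimal NumPy/SciPy sketch of the steps the abstract describes. The LNSST transform and the trained FCN are not reproduced here, so `lnsst_decompose`, `lnsst_reconstruct`, and `fcn_weight_map` are hypothetical placeholders supplied by the caller, and the way the FCN weight map and the average gradient interact in `fuse_high` is one plausible reading of the abstract, not the authors' exact rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(band, win=3):
    # Windowed sum of squared coefficients (local energy).
    band = band.astype(np.float64)
    return uniform_filter(band ** 2, size=win) * (win * win)

def fuse_low(low_ir, low_vis, win=3):
    # Low-frequency rule: keep the coefficient with the larger local energy.
    return np.where(local_energy(low_ir, win) >= local_energy(low_vis, win),
                    low_ir, low_vis)

def average_gradient(band):
    # Average gradient (AVG), a per-band sharpness measure.
    gx, gy = np.gradient(band.astype(np.float64))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def fuse_high(h_ir, h_vis, fcn_weight_map):
    # High-frequency rule (one plausible reading of the abstract):
    # blend with the FCN weight map, letting AVG decide which source
    # the weights favor.
    w = fcn_weight_map(h_ir, h_vis)  # array of weights in [0, 1]
    if average_gradient(h_ir) < average_gradient(h_vis):
        w = 1.0 - w
    return w * h_ir + (1.0 - w) * h_vis

def fuse(ir, vis, lnsst_decompose, lnsst_reconstruct, fcn_weight_map):
    # Full pipeline: decompose, fuse each band, reconstruct.
    low_ir, highs_ir = lnsst_decompose(ir)
    low_vis, highs_vis = lnsst_decompose(vis)
    fused_low = fuse_low(low_ir, low_vis)
    fused_highs = [fuse_high(a, b, fcn_weight_map)
                   for a, b in zip(highs_ir, highs_vis)]
    return lnsst_reconstruct(fused_low, fused_highs)
```

Under these assumptions, `lnsst_decompose` returns one low-frequency sub-image and a list of high-frequency sub-images per source; any multi-scale transform with that interface could be substituted to experiment with the two fusion rules in isolation.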





Author information

Correspondence to Hong Yin.


Cite this article

Feng, Y., Lu, H., Bai, J. et al. Fully convolutional network-based infrared and visible image fusion. Multimed Tools Appl 79, 15001–15014 (2020). https://doi.org/10.1007/s11042-019-08579-w
