A survey of multi-source image fusion

Published in: Multimedia Tools and Applications


Multi-source image fusion has become an important technology in image understanding and computer vision. Its purpose is to intelligently synthesize image data from multiple information sources so as to produce descriptions and judgments that are more accurate and reliable than those obtained from single-sensor data, and to make the fused images more consistent with human and machine visual perception. Although multi-source image fusion has been studied extensively, few papers summarize both its theoretical and experimental aspects. This paper reviews, classifies and discusses state-of-the-art multi-source image fusion methods. We comprehensively introduce existing image fusion evaluation methods and compare them against different standards. Representative algorithms are evaluated using 12 well-known objective fusion metrics, and the advantages and disadvantages of each category of methods are discussed in detail. Finally, we discuss the challenges encountered in this field, along with possible future research directions and development prospects.
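The abstract mentions evaluating fused images with objective fusion metrics. As a rough illustration (not taken from the paper itself), two of the most common such metrics are image entropy and the mutual information between a source image and the fused result; the sketch below, assuming 8-bit grayscale inputs as NumPy arrays, shows how they are typically computed from histograms:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (in bits) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins before taking log
    return float(-np.sum(p * np.log2(p)))

def mutual_information(a, b, bins=256):
    """Mutual information (in bits) between two equal-sized grayscale images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=((0, 256), (0, 256)))
    pxy = joint / joint.sum()             # joint distribution of gray levels
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Toy check: a constant image carries no information; an image shares
# all of its information with itself.
flat = np.zeros((64, 64), dtype=np.uint8)
noisy = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(abs(entropy(flat)))                                    # 0.0
print(np.isclose(mutual_information(noisy, noisy), entropy(noisy)))
```

In fusion evaluation, the mutual information of the fused image with each source is usually summed, rewarding results that preserve information from every input; the 12 metrics used in the survey extend this idea with gradient-based, structure-based and perception-based measures.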


Figures 1–9 are available in the full-text article.


Data Availability

The datasets analysed during the current study are available online.



Author information

Correspondence and requests for materials should be addressed to the corresponding author, Mingquan Zhou.

Ethics declarations

Conflicts of interest

All authors have participated in (a) the conception and design, or the analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version. This manuscript has not been submitted to, and is not under review at, another journal or other publishing venue. The authors have no affiliation with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript.


About this article


Cite this article

Li, R., Zhou, M., Zhang, D. et al. A survey of multi-source image fusion. Multimed Tools Appl 83, 18573–18605 (2024).
