
Multi-modality medical image fusion based on guided filter and image statistics in multidirectional shearlet transform domain

  • Original Research
  • Published:
Journal of Ambient Intelligence and Humanized Computing

Abstract

Owing to differing imaging principles and the complexity of human organ structures, a single-modality image can provide only limited information. Multimodality image fusion integrates multimodal images into a single image that improves image quality by retaining significant features, helping diagnostic imaging practitioners evaluate and treat medical problems accurately. Current prevailing image fusion techniques present numerous challenges, including the prevalence of fusion artefacts, design complexity and high computational cost. In this paper, a novel multimodal medical image fusion method is presented to address these problems. The proposed approach combines a guided filter and image statistics in the shearlet transform domain. The multimodal images are first decomposed using the shearlet transform, which captures the texture information of the original images in multiple directional orientations and separates each paired image into low- and high-frequency coefficients (i.e. base and detail layers). A guided filter with a high epsilon value is then used to obtain weights for the original paired images, and these weights are added to the base layers to obtain unified base layers. A fusion rule based on the guided image filter and image statistics is used to fuse the base layers: a covariance matrix is computed over each neighbourhood and its eigenvalues identify the significant pixels. Similarly, a choose-max fusion rule is used to fuse the detail layers. The fused base and detail layers are then merged using the inverse shearlet transform to obtain the final fusion result. The proposed method is evaluated on medical image datasets, and the experimental results demonstrate that the proposed algorithm exhibits promising results and outperforms other prevailing fusion techniques.
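The sketch below illustrates, under stated assumptions, the flavour of the layer-fusion step described above: base layers are weighted by a local-covariance (eigenvalue) saliency measure refined with a guided filter, and detail layers are combined with a choose-max rule. It is not the authors' implementation; the shearlet decomposition is assumed to come from an external library and is not reproduced, the function names (`statistics_weight`, `fuse_layers`) and the parameter values (`radius`, `eps`) are hypothetical placeholders, and the gradient-covariance weight is one plausible reading of the "image statistics" rule.

```python
# Minimal sketch of guided-filter + image-statistics layer fusion (assumptions noted above).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=0.1):
    """Classic guided filter (He et al.) with a box filter; guide steers the smoothing of src."""
    w = 2 * radius + 1
    mean_I, mean_p = uniform_filter(guide, w), uniform_filter(src, w)
    var_I = uniform_filter(guide * guide, w) - mean_I * mean_I
    cov_Ip = uniform_filter(guide * src, w) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, w) * guide + uniform_filter(b, w)

def statistics_weight(img, radius=2):
    """Per-pixel saliency: the larger eigenvalue of the local 2x2 covariance of image gradients
    (an assumed stand-in for the paper's neighbourhood covariance statistics)."""
    gy, gx = np.gradient(img.astype(np.float64))
    w = 2 * radius + 1
    cxx, cyy = uniform_filter(gx * gx, w), uniform_filter(gy * gy, w)
    cxy = uniform_filter(gx * gy, w)
    # Closed-form eigenvalues of [[cxx, cxy], [cxy, cyy]]; keep the larger one.
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    return tr / 2.0 + np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))

def fuse_layers(base_a, base_b, details_a, details_b):
    """Fuse base layers by guided-filter-refined statistical weights; details by choose-max."""
    wa = np.clip(guided_filter(base_a, statistics_weight(base_a)), 0, None)
    wb = np.clip(guided_filter(base_b, statistics_weight(base_b)), 0, None)
    fused_base = (wa * base_a + wb * base_b) / (wa + wb + 1e-12)
    fused_details = [np.where(np.abs(da) >= np.abs(db), da, db)
                     for da, db in zip(details_a, details_b)]
    return fused_base, fused_details
```

In a full pipeline the base and detail layers passed to `fuse_layers` would be the low- and high-frequency shearlet coefficients of the two source modalities, and the fused coefficients would be passed to the inverse shearlet transform to reconstruct the final image.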



Data availability statement

The datasets used for the experiments were obtained from https://drive.google.com/drive/mobile/folders/0BzXT0LnoyRqlY2d0UTJnb2ZoMk0


Acknowledgements

The authors are thankful to CSIR, New Delhi, for providing the opportunity to work at CSIR-CSIO, Chandigarh, under the CSIR-Nehru Post-Doctoral Fellowship.

Author information

Corresponding author

Correspondence to Ayush Dogra.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Dogra, A., Kumar, S. Multi-modality medical image fusion based on guided filter and image statistics in multidirectional shearlet transform domain. J Ambient Intell Human Comput 14, 12191–12205 (2023). https://doi.org/10.1007/s12652-022-03764-6

