
Multi-layer, multi-modal medical image intelligent fusion

  • 1221: Deep Learning for Image/Video Compression and Visual Quality Assessment
  • Published in: Multimedia Tools and Applications

Abstract

Deep learning has recently gained popularity in image processing owing to its distinctive feature-extraction capability. This paper proposes a novel multi-layer, multi-modal system, Multi-Layer Intelligent Image Fusion (MLIIF), that uses deep learning (DL) networks to produce visually enhanced medical images through fusion. A deep-feature-based multi-layer fusion strategy is applied to both the high-frequency and low-frequency components to obtain a more informative fused image from the source image sets. The hybrid MLIIF system employs the VGG-19, VGG-11, and SqueezeNet DL networks to extract deep features at different layers from the approximation and detail frequency components of the source images. The robustness of the proposed system is validated by subjective and objective analysis, and its effectiveness is evaluated by computing the error image against the ground-truth image, from which the system's accuracy is derived. The source images used in the experiments were collected from www.med.harvard.edu, and the proposed MLIIF system achieved an accuracy of 95%. The experimental findings indicate that the proposed system outperforms existing DL networks.
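The pipeline summarized above (decompose each source image into approximation and detail frequency bands, fuse corresponding bands with activity-driven weights, then reconstruct) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a single-level Haar transform stands in for the multi-resolution decomposition, and a simple l1-norm activity map stands in for the VGG-19/VGG-11/SqueezeNet deep-feature weight maps used in MLIIF.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2-D Haar transform: one approximation + three detail bands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def haar_reconstruct(a, h, v, d):
    """Inverse of haar_decompose (perfect reconstruction)."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def activity(band):
    """l1 activity map; MLIIF would derive this from deep features instead."""
    return np.abs(band)

def fuse(img_a, img_b):
    """Fuse two registered, same-size grayscale images band by band."""
    bands_a = haar_decompose(img_a)
    bands_b = haar_decompose(img_b)
    fused = []
    for ba, bb in zip(bands_a, bands_b):
        wa, wb = activity(ba), activity(bb)
        # Weighted average; the band with more activity dominates.
        fused.append((wa * ba + wb * bb) / (wa + wb + 1e-12))
    return haar_reconstruct(*fused)
```

In the paper's system, `activity` would instead be computed from multi-layer deep features of each frequency component; this sketch only shows how band-wise weighting and reconstruction fit together.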




Acknowledgments

The success of any research work rests on trustworthy and suitable information. We thank HCG Cancer Research Hospital, Bangalore, for its contribution to the subjective assessment, in particular Dr. Ravi Nayar, ENT surgeon and dean, Dr. Shiv Kumar, head of radiology, and the HCG team. For valuable advice and technical suggestions at all stages of our work, we thank Dr. VPS Naidu, Principal Scientist and Assoc. Prof. (AcSIR), National Aerospace Laboratories (NAL).

Funding

The authors did not receive support from any organization for the submitted work.

Author information


Corresponding author

Correspondence to Tripty Singh.

Ethics declarations

Conflict of Interests

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Nair, R.R., Singh, T., Basavapattana, A. et al. Multi-layer, multi-modal medical image intelligent fusion. Multimed Tools Appl 81, 42821–42847 (2022). https://doi.org/10.1007/s11042-022-13482-y

