Multimodal Medical Image Fusion Using Stacked Auto-encoder in NSCT Domain

Abstract

Medical image fusion is a process that aims to merge the important information from images of different modalities of the same organ of the human body to create a more informative fused image. In recent years, deep learning (DL) methods have achieved significant breakthroughs in the field of image fusion because of their great efficiency. DL methods in image fusion have become an active topic due to their strong feature extraction and data representation ability. In this work, a stacked sparse auto-encoder (SSAE), a general category of deep neural networks, is exploited for medical image fusion. The SSAE is an efficient technique for unsupervised feature extraction with a high capability of representing complex data. The proposed fusion method is carried out as follows. Firstly, the source images are decomposed into low- and high-frequency coefficient sub-bands with the non-subsampled contourlet transform (NSCT). The NSCT is a flexible multi-scale decomposition technique that is superior to traditional decomposition techniques in several aspects. Then, the SSAE is employed for feature extraction to obtain a sparse and deep representation of the high-frequency coefficients. The spatial frequencies of the obtained features are computed and used to fuse the high-frequency coefficients. A maximum-based fusion rule is then applied to fuse the low-frequency sub-band coefficients. The final integrated image is obtained by applying the inverse NSCT. The proposed method has been applied and assessed on various groups of medical image modalities. Experimental results demonstrate that the proposed method effectively merges multimodal medical images while preserving detail information.
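
To make the fusion rules described above concrete, the following Python sketch (a minimal illustration, not the authors' released code) implements the two rules: the spatial frequency of the SSAE feature maps guides the selection of high-frequency NSCT coefficients, and a maximum rule combines the low-frequency sub-bands. The NSCT decomposition and the trained SSAE encoder are assumed to be available separately; the arrays in the toy usage are random stand-ins for sub-band coefficients and feature maps.

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Spatial frequency SF = sqrt(RF^2 + CF^2) of a 2-D coefficient or feature block."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row (horizontal) frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column (vertical) frequency
    return float(np.hypot(rf, cf))

def fuse_high_freq(coeff_a, coeff_b, feat_a, feat_b):
    """Keep the high-frequency sub-band whose SSAE features show the larger
    spatial frequency (activity level); feat_a/feat_b stand in for encoder outputs."""
    return coeff_a if spatial_frequency(feat_a) >= spatial_frequency(feat_b) else coeff_b

def fuse_low_freq(low_a: np.ndarray, low_b: np.ndarray) -> np.ndarray:
    """Maximum-based rule on the low-frequency sub-band coefficients."""
    return np.maximum(low_a, low_b)

# Toy usage with random stand-ins for NSCT sub-bands and SSAE feature maps.
rng = np.random.default_rng(0)
high_ct, high_mri = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
fused_high = fuse_high_freq(high_ct, high_mri, high_ct, high_mri)
fused_low = fuse_low_freq(rng.random((64, 64)), rng.random((64, 64)))
```

The fused low- and high-frequency sub-bands would then be passed to the inverse NSCT to obtain the final fused image.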


Data Availability

The data that support the findings of this study are openly available. All the source images can be obtained from http://www.metapix.de/fusion.htm, www.med.harvard.edu/aanlib/home.html, and www.imagefusion.org.

Code Availability

The code that supports the findings of this study is available from the corresponding author, Nahed Tawfik, upon reasonable request.


Acknowledgements

This work was carried out at the Computers and Systems Department, Electronics Research Institute.

Author information

Contributions

The idea for the manuscript was conceived at a meeting attended by Nahed Tawfik, Heba A. Elnemr, Mahmoud Fakhr, Moawad I. Dessouky, and Fathi E. Abd El-Samie. Nahed Tawfik and Heba A. Elnemr participated in implementing all experiments and analyzing the experimental results. In addition, Nahed Tawfik and Heba A. Elnemr participated in the research plan, the study organization, and the manuscript writing. All authors reviewed the manuscript and were involved in its critical revision before submission.

Corresponding author

Correspondence to Nahed Tawfik.

Ethics declarations

Ethics Approval

This article is original and contains unpublished material. The corresponding author confirms that no ethical issues are involved.

Consent to Participate

I understand that even if I agree to participate now, I can withdraw at any time or refuse to answer any question without any consequences of any kind.

Consent for Publication

I understand that the text and any pictures or videos published in the article:

  1. will be used only in educational publications intended for professionals, or

  2. if the publication or product is published on an open access basis, will be freely available on the internet and may be seen by the general public.

The pictures, videos, and text may also appear on other websites or in print, may be translated into other languages, or used for commercial purposes.

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Tawfik, N., Elnemr, H.A., Fakhr, M. et al. Multimodal Medical Image Fusion Using Stacked Auto-encoder in NSCT Domain. J Digit Imaging 35, 1308–1325 (2022). https://doi.org/10.1007/s10278-021-00554-y

  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s10278-021-00554-y

Keywords
