Haar wavelet transform–based optimal Bayesian method for medical image fusion

Abstract

Image fusion (IF) attracts researchers in the medical field because fusing images provides valuable information that supports effective clinical decisions. To achieve effective image fusion, this paper concentrates on a Bayesian fusion approach that is tuned optimally using the proposed Fractional Bird Swarm Optimization (Fractional-BSA). Medical image fusion is carried out on MRI brain images taken from the BRATS database, and source images of multiple modalities are fused to produce an information-rich fused image. The source images are subjected to the Haar wavelet transform, and Bayesian fusion is performed using a Bayesian parameter that is determined optimally by the proposed Fractional-BSA. The proposed optimization integrates fractional theory into the standard Bird Swarm Algorithm (BSA), which improves the effectiveness of image fusion. Unlike existing methods, the proposed Fractional-BSA-based Bayesian fusion approach yields high-quality fusion with low complexity. The analysis reveals that the method outperforms existing methods, achieving a maximal mutual information of 1.5665, a maximal peak signal-to-noise ratio (PSNR) of 44.0857 dB, and a minimal root mean square error (RMSE) of 5.4840.
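The evaluation reported above rests on three standard metrics: mutual information, PSNR, and RMSE. The sketch below is an illustrative NumPy implementation of these metrics between a fused image and a reference image, not the authors' evaluation code; it assumes 8-bit grayscale arrays and an arbitrary histogram bin count for the mutual information estimate.

```python
import numpy as np

def rmse(reference: np.ndarray, fused: np.ndarray) -> float:
    """Root mean square error between two equally sized grayscale images."""
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(reference: np.ndarray, fused: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, assuming an 8-bit intensity range."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """Mutual information (in bits) estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```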

Schematic diagram of medical image fusion

Medical IF is a significant research domain: it provides a fused image that carries more information about a scene than any single source image. Fusing multiple modalities enriches the image content, increasing the reliability and overall information of the result. Medical IF therefore offers an efficient representation of the input data, giving physicians a wide range of information for effective decision-making.

This paper addresses medical IF based on a Bayesian fusion approach applied to images of different modalities. The input is the MRI brain image with four modalities, Flair, T2, T1, and T1C; any two of the four modalities are taken as the source images for fusion. The first step in IF is the generation of the wavelet coefficients, low–low (LL), high–low (HL), low–high (LH), and high–high (HH), using the Haar wavelet transform. Once the wavelet coefficients are derived, the subbands are fused through Bayesian fusion, which is driven by the proposed Fractional-BSA. After the fused bands are formed, the inverse Haar wavelet transform generates the fused image. The fusion is performed at the pixel level so that the image quality is assured with a high level of information for clinical applications; the advantage of pixel-level fusion is that the original measured quantities are involved directly in the fusion process.
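To make the pipeline concrete, the following is a minimal Python/NumPy sketch, assuming two co-registered grayscale MRI slices of even height and width. The single-level Haar decomposition and its inverse use the standard orthonormal filter pair; the subband fusion here is a plain weighted average whose weight `w` merely stands in for the Bayesian fusion parameter that the paper tunes with Fractional-BSA, so it is an illustrative simplification rather than the authors' fusion rule.

```python
import numpy as np

def haar2d(img: np.ndarray):
    """Single-level 2D Haar decomposition into LL, LH, HL, HH subbands."""
    x = img.astype(np.float64)
    # Horizontal transform: pairwise averages (low-pass) and differences (high-pass).
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2.0)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2.0)
    # Vertical transform on each half.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2.0)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2.0)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2.0)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2.0)
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d: reconstruct the image from its four subbands."""
    rows, cols = ll.shape
    lo = np.empty((2 * rows, cols))
    hi = np.empty((2 * rows, cols))
    lo[0::2, :] = (ll + lh) / np.sqrt(2.0)
    lo[1::2, :] = (ll - lh) / np.sqrt(2.0)
    hi[0::2, :] = (hl + hh) / np.sqrt(2.0)
    hi[1::2, :] = (hl - hh) / np.sqrt(2.0)
    img = np.empty((2 * rows, 2 * cols))
    img[:, 0::2] = (lo + hi) / np.sqrt(2.0)
    img[:, 1::2] = (lo - hi) / np.sqrt(2.0)
    return img

def fuse(src_a: np.ndarray, src_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse two source modalities subband-by-subband with a weighted average.

    `w` is a hypothetical fixed weight; in the paper, the Bayesian fusion
    parameter is instead selected by the Fractional-BSA optimizer.
    """
    bands_a = haar2d(src_a)
    bands_b = haar2d(src_b)
    fused_bands = [w * a + (1.0 - w) * b for a, b in zip(bands_a, bands_b)]
    return ihaar2d(*fused_bands)
```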

Author information

Corresponding author

Correspondence to Jayant Bhardwaj.

Cite this article

Bhardwaj, J., Nayak, A. Haar wavelet transform–based optimal Bayesian method for medical image fusion. Med Biol Eng Comput (2020). https://doi.org/10.1007/s11517-020-02209-6

Keywords

  • Haar wavelet
  • Optimization
  • Fractional theory
  • MRI image
  • Image fusion