
A sum-modified-Laplacian and sparse representation based multimodal medical image fusion in Laplacian pyramid domain

  • Xiaoqing Li
  • Xuming Zhang
  • Mingyue Ding
Original Article

Abstract

Fusion of multimodal medical images provides complementary information for diagnosis, surgical planning, and clinical outcome evaluation. Although multiscale decomposition–based fusion methods have attracted much attention among researchers, the challenges of determining the decomposition levels and the loss of contrast hinder their application. Here, we present a multimodal medical image fusion method combining the sum-modified-Laplacian (SML) with sparse representation (SR) in the Laplacian pyramid domain. In this method, we first transform the original images into high-pass and low-pass bands using the Laplacian pyramid (LP). Then, we use SML and SR to fuse the high-pass and low-pass bands, respectively. The proposed method has been compared with different methods, including NSST_VGG_MAX, DWT_ARV_BURTS, CVT_MAX_LIS, and NSCT_SR_MAX. We also conducted multiple experiments on four groups of medical images, including CT and MR, T1-weighted MR and T2-weighted MR, PET and MR, as well as SPECT and MR, to demonstrate the advantages of our method. Visual and quantitative results illustrate that our method produces fused images with better brightness contrast and retains more image details than the other evaluated methods in terms of MI, L^{AB/F}, Q^{AB/F}, and Q_w. Furthermore, our method preserves more fine and useful functional information with better image contrast, which is highly relevant to the assessment of lesion shapes and positions.
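The SML activity measure used for the high-pass bands is a standard focus measure: a modified Laplacian ML(i,j) = |2I(i,j) − I(i−1,j) − I(i+1,j)| + |2I(i,j) − I(i,j−1) − I(i,j+1)|, summed over a local window. The following is a minimal NumPy sketch of that measure; the 3×3 window and edge padding are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def sum_modified_laplacian(img, window=3):
    """Sum-modified-Laplacian (SML) activity measure.

    ML(i,j) = |2I - I_left - I_right| + |2I - I_up - I_down|,
    then summed over a local window (a box sum here).
    """
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode='edge')
    ml = (np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]) +
          np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]))
    # accumulate ML over the window by shifting and summing
    r = window // 2
    mp = np.pad(ml, r, mode='edge')
    h, w = ml.shape
    sml = np.zeros_like(ml)
    for di in range(window):
        for dj in range(window):
            sml += mp[di:di + h, dj:dj + w]
    return sml
```

In a max-selection fusion rule, the high-pass coefficient with the larger SML at each pixel would be kept.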

Graphical abstract

The basic framework of the proposed fusion method based on the sum-modified-Laplacian and sparse representation in the Laplacian pyramid domain, illustrated with CT and MR images.

In the proposed method, the source images are transformed into the low-pass bands and the high-pass bands using the Laplacian pyramid (LP). The low-pass bands are fused by SR, while SML is used to fuse the high-pass bands. Visual and quantitative results show that the proposed method can produce fused images with better brightness contrast and retain more image details than the other compared methods.
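The decompose–fuse–reconstruct pipeline described above can be sketched in miniature. This toy NumPy version uses a simple box blur as the pyramid filter, an absolute-maximum rule as a stand-in for the paper's SML rule on the high-pass bands, and plain averaging as a stand-in for SR fusion of the low-pass bands; all three are simplifying assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter with edge padding (stand-in for the LP's Gaussian kernel)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def upsample(img, shape):
    """Nearest-neighbour 2x upsampling, cropped to a target shape."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Decompose into `levels` high-pass bands plus one low-pass residual."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels):
        down = box_blur(cur)[::2, ::2]
        pyr.append(cur - upsample(down, cur.shape))  # high-pass band
        cur = down
    pyr.append(cur)  # low-pass residual
    return pyr

def reconstruct(pyr):
    """Invert the pyramid: upsample and add, coarse to fine."""
    cur = pyr[-1]
    for hp in reversed(pyr[:-1]):
        cur = upsample(cur, hp.shape) + hp
    return cur

def fuse(a, b, levels=3):
    """Fuse two registered source images in the LP domain."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)  # activity-max rule
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))                # stand-in for SR fusion
    return reconstruct(fused)
```

Because each high-pass band stores the difference between a level and the upsampled next level, the decomposition is exactly invertible, so any information lost in fusion comes from the band-wise selection rules rather than the transform itself.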

Keywords

Sparse representation · Sum-modified-Laplacian · Laplacian pyramid · Medical image fusion

Notes

Funding information

This work was supported by the National Natural Science Foundation of China (grant no. 81571754) and partly supported by the Major National Scientific Instrument and Equipment Development Project (grant no. 2013YQ160551).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflicts of interest.


Copyright information

© International Federation for Medical and Biological Engineering 2019

Authors and Affiliations

  1. Department of Biomedical Engineering, School of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan, China
