Multimedia Tools and Applications, Volume 78, Issue 1, pp 929–945

An improved coupled dictionary and multi-norm constraint fusion method for CT/MR medical images

  • Lifang Wang
  • Xia Dong
  • Xi Cheng
  • Suzhen Lin


A single dictionary struggles to produce an accurate sparse representation of an image, and a single norm used as the activity-level measure of source image blocks fails to preserve image detail; both shortcomings degrade fusion quality. To address these problems, this paper proposes an improved coupled dictionary and multi-norm constraint fusion method for CT/MR images. CT/MR image pairs are used as the training set, and a coupled CT dictionary and MR dictionary are each learned with an improved K-SVD algorithm; the fusion dictionary is then obtained by combining the coupled CT and MR dictionaries in the spatial domain. First, the registered source images are rearranged into column vectors and their means are removed. Accurate sparse representation coefficients are computed under the fusion dictionary with the CoefROMP algorithm. Next, a multi-norm constraint on the sparse representation coefficients serves as the activity-level measure of the source image blocks, and the coefficients are fused by the "choose the maximum" rule. Finally, the fused image is obtained by reconstruction. Experimental results show that the proposed method retains more image detail, improves the contrast and clarity of the fused image, makes focal regions more prominent, runs faster, and can support clinical diagnosis and auxiliary treatment.
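The per-patch fusion step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the CoefROMP sparse coder is replaced by a plain orthogonal matching pursuit, and the unspecified multi-norm activity measure is assumed here to be the sum of the l1 and l-infinity norms of the coefficient vector. The dictionary `D` stands in for the fused CT/MR dictionary.

```python
import numpy as np

def omp(D, y, sparsity):
    """Greedy orthogonal matching pursuit: sparse-code y over dictionary D.
    Stand-in for the CoefROMP coder used in the paper."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all selected atoms jointly and update the residual.
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ sol
    coef[support] = sol
    return coef

def fuse_patch(D, y_ct, y_mr, sparsity=4):
    """Fuse two mean-removed patch vectors: sparse-code each over the
    fused dictionary, measure activity with a multi-norm combination
    (assumed l1 + l-infinity), keep the coefficient vector with the
    larger activity ("choose the maximum"), and reconstruct."""
    a_ct = omp(D, y_ct, sparsity)
    a_mr = omp(D, y_mr, sparsity)

    def activity(a):
        return np.linalg.norm(a, 1) + np.linalg.norm(a, np.inf)

    a_fused = a_ct if activity(a_ct) >= activity(a_mr) else a_mr
    return D @ a_fused
```

In practice the mean removed before coding is added back after reconstruction, and overlapping patches are averaged when reassembling the fused image.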


Keywords: CT/MR image fusion; improved coupled dictionary; multi-norm constraint



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. School of Data Science and Technology, North University of China, Taiyuan, China
