
FPRSGF denoised non-subsampled shearlet transform-based image fusion using sparse representation

  • Sonal Goyal
  • Vijander Singh
  • Asha Rani
  • Navdeep Yadav
Original Paper

Abstract

In this work, a multiscale decomposition and sparse representation-based multimodal medical image fusion technique is proposed. An efficient denoising technique, the feature-preserving regularized Savitzky–Golay filter (FPRSGF), is first applied to obtain noise-free source images. The filtered medical images are then decomposed into low- and high-pass subbands by the non-subsampled shearlet transform (NSST). Sparse coefficient vectors of the low-pass subbands are computed over a pre-learned dictionary, and the “max-L1” rule is applied to obtain the fused low-pass subband, while the high-pass subbands are fused using the “max-absolute” rule. Finally, the inverse NSST is applied to reconstruct the fused multimodal medical image. Fusion based on the non-subsampled contourlet transform, NSST with a parameter-adaptive pulse-coupled neural network, and phase congruency is also realized for comparative analysis. Multiple experiments on clean and noisy sets of gray and color medical images are performed, and the fusion techniques are also tested on infrared–visible image pairs. The visual and quantitative outcomes verify that the suggested technique outperforms state-of-the-art fusion techniques.
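The sketch below is a minimal NumPy illustration of the two fusion rules named in the abstract, not the authors' implementation. It assumes the NSST decomposition, the pre-learned dictionary, and the patch-wise sparse coding are produced by external tooling; all function and variable names are illustrative.

    import numpy as np

    def fuse_lowpass_max_l1(coeffs_a, coeffs_b):
        """'max-L1' rule for the low-pass subbands: for each image patch, keep
        the sparse coefficient vector (column) whose L1 norm is larger.

        coeffs_a, coeffs_b: (n_atoms, n_patches) sparse codes of the two source
        images' low-pass subbands over the same pre-learned dictionary.
        """
        l1_a = np.abs(coeffs_a).sum(axis=0)   # L1 norm per patch, image A
        l1_b = np.abs(coeffs_b).sum(axis=0)   # L1 norm per patch, image B
        return np.where(l1_a >= l1_b, coeffs_a, coeffs_b)

    def fuse_highpass_max_abs(band_a, band_b):
        """'max-absolute' rule for the high-pass subbands: at every coefficient
        position, keep the value with the larger magnitude."""
        return np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b)

    # Outline of the remaining pipeline (decomposition/reconstruction omitted):
    # fused_low_codes = fuse_lowpass_max_l1(codes_a, codes_b)
    # fused_low_patches = dictionary @ fused_low_codes   # back to patch space
    # fused_highs = [fuse_highpass_max_abs(ha, hb)
    #                for ha, hb in zip(highs_a, highs_b)]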

Keywords

Multimodal medical image fusion · Feature-preserving regularized Savitzky–Golay filter · Sparse representation · Non-subsampled shearlet transform

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  1. Netaji Subhas Institute of Technology, University of Delhi, New Delhi, India
