
Medical & Biological Engineering & Computing, Volume 57, Issue 4, pp 887–900

Brain CT and MRI medical image fusion using convolutional neural networks and a dual-channel spiking cortical model

  • Ruichao Hou
  • Dongming Zhou (corresponding author)
  • Rencan Nie
  • Dong Liu
  • Xiaoli Ruan
ORIGINAL ARTICLE

Abstract

The aim of medical image fusion is to improve clinical diagnostic accuracy by generating a fused image that preserves the salient features and details of the source images. This paper designs a novel fusion scheme for CT and MRI medical images based on convolutional neural networks (CNNs) and a dual-channel spiking cortical model (DCSCM). First, the non-subsampled shearlet transform (NSST) is used to decompose each source image into a low-frequency coefficient and a series of high-frequency coefficients. Second, the low-frequency coefficients are fused by the CNN framework, in which a weight map is generated from a series of feature maps by an adaptive selection rule; the high-frequency coefficients are fused by the DCSCM, with the modified average gradient of the high-frequency coefficients serving as its input stimulus. Finally, the fused image is reconstructed by the inverse NSST. Experimental results indicate that the proposed scheme performs well in both subjective visual quality and objective evaluation, and that it outperforms other current typical methods in detail retention and visual effect.
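To make the four-step pipeline concrete, the following is a minimal structural sketch in Python/NumPy. It is not the authors' implementation: a Gaussian base/detail split stands in for the NSST decomposition, a local-energy map stands in for the CNN-generated weight map, and a choose-max rule over the average gradient stands in for the DCSCM firing decision. All function names are illustrative.

```python
import numpy as np
from scipy import ndimage

def decompose(img, sigma=2.0):
    """Stand-in for NSST: split an image into a low-frequency base
    and a high-frequency residual via Gaussian filtering."""
    low = ndimage.gaussian_filter(img, sigma)
    return low, img - low

def local_energy(band, size=7):
    """Local energy map; stands in for the CNN-derived weight map."""
    return ndimage.uniform_filter(band ** 2, size)

def average_gradient(band, size=7):
    """Local mean gradient magnitude; stands in for the paper's
    modified average gradient used as the DCSCM input stimulus."""
    gy, gx = np.gradient(band)
    return ndimage.uniform_filter(np.sqrt(gx ** 2 + gy ** 2), size)

def fuse(ct, mri):
    low_a, high_a = decompose(ct)
    low_b, high_b = decompose(mri)

    # Low-frequency fusion: weighted average driven by a weight map
    # (the paper derives this map from CNN feature maps with an
    # adaptive selection rule; local energy is a crude proxy).
    wa, wb = local_energy(low_a), local_energy(low_b)
    w = wa / (wa + wb + 1e-12)
    low_f = w * low_a + (1 - w) * low_b

    # High-frequency fusion: keep the coefficient whose activity
    # (here: average gradient) is larger, mimicking the channel that
    # would win in a dual-channel spiking cortical model.
    mask = average_gradient(high_a) >= average_gradient(high_b)
    high_f = np.where(mask, high_a, high_b)

    return low_f + high_f  # stand-in for the inverse NSST

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct, mri = rng.random((256, 256)), rng.random((256, 256))
    print(fuse(ct, mri).shape)
```

The choose-max rule mirrors the intuition behind the spiking model: the channel receiving the stronger stimulus dominates the fused high-frequency coefficient, which is how the DCSCM favors the more detailed source at each pixel.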

Graphical abstract

A schematic diagram of the CT and MRI medical image fusion framework using convolutional neural networks and a dual-channel spiking cortical model.

Keywords

Image fusion · Non-subsampled shearlet transform · Convolutional neural networks · Dual-channel spiking cortical model

Notes

Acknowledgements

The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions for this study.

Funding information

This work was supported by the National Natural Science Foundation of China under Grants 61463052 and 61365001 and Yunnan Province University Key Laboratory Construction Plan Funding, China.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.


Copyright information

© International Federation for Medical and Biological Engineering 2018

Authors and Affiliations

  • Ruichao Hou¹
  • Dongming Zhou¹ (corresponding author)
  • Rencan Nie¹
  • Dong Liu¹
  • Xiaoli Ruan¹

  1. Information College, Yunnan University, Kunming, China
