Brain CT and MRI medical image fusion using convolutional neural networks and a dual-channel spiking cortical model
The aim of medical image fusion is to improve clinical diagnostic accuracy by generating a fused image that preserves the salient features and details of the source images. This paper proposes a novel fusion scheme for CT and MRI medical images based on convolutional neural networks (CNNs) and a dual-channel spiking cortical model (DCSCM). First, the non-subsampled shearlet transform (NSST) is used to decompose each source image into a low-frequency coefficient and a series of high-frequency coefficients. Second, the low-frequency coefficients are fused within the CNN framework, where a weight map is generated from a series of feature maps through an adaptive selection rule, while the high-frequency coefficients are fused by the DCSCM, with the modified average gradient of the high-frequency coefficients serving as its input stimulus. Finally, the fused image is reconstructed by the inverse NSST. Experimental results indicate that the proposed scheme performs well in both subjective visual quality and objective evaluation, and surpasses other current typical methods in detail retention and visual effect.
Keywords: Image fusion · Non-subsampled shearlet transform · Convolutional neural networks · Dual-channel spiking cortical model
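To make the overall workflow in the abstract concrete, the following Python sketch outlines a generic multi-scale CT/MRI fusion pipeline. It is an illustrative simplification, not the authors' implementation: a Gaussian low-/high-frequency split stands in for the NSST decomposition, a local-energy weight map stands in for the CNN-derived weights, and a gradient-based "choose-max" rule stands in for the DCSCM firing comparison; all function names and parameters are assumptions for illustration.

```python
# Simplified multi-scale CT/MRI fusion sketch (illustrative stand-ins only).
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(img, sigma=2.0):
    """Split an image into low- and high-frequency parts (NSST stand-in)."""
    low = gaussian_filter(img, sigma)
    return low, img - low

def fuse(ct, mri, sigma=2.0):
    ct = ct.astype(np.float64)
    mri = mri.astype(np.float64)

    low_ct, high_ct = split_bands(ct, sigma)
    low_mri, high_mri = split_bands(mri, sigma)

    # Low-frequency rule: weighted average driven by local energy
    # (placeholder for the CNN-generated weight map in the paper).
    e_ct = gaussian_filter(low_ct ** 2, sigma)
    e_mri = gaussian_filter(low_mri ** 2, sigma)
    w = e_ct / (e_ct + e_mri + 1e-12)
    low_fused = w * low_ct + (1.0 - w) * low_mri

    # High-frequency rule: keep the coefficient with the larger local
    # gradient activity (placeholder for the DCSCM with a modified
    # average-gradient input stimulus).
    g_ct = np.hypot(*np.gradient(high_ct))
    g_mri = np.hypot(*np.gradient(high_mri))
    high_fused = np.where(g_ct >= g_mri, high_ct, high_mri)

    # With this two-band split, "inverse transform" is just the band sum.
    return low_fused + high_fused

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct_img = rng.random((256, 256))   # placeholder CT slice
    mri_img = rng.random((256, 256))  # placeholder MRI slice
    fused = fuse(ct_img, mri_img)
    print(fused.shape, fused.dtype)
```

In the paper's actual scheme, the two-band Gaussian split is replaced by an NSST decomposition with several directional high-frequency sub-bands, the low-frequency weight map comes from CNN feature maps with an adaptive selection rule, and the high-frequency comparison is carried out by the dual-channel spiking cortical model.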
The authors thank the editors and the anonymous reviewers for their careful work and valuable suggestions on this study.
This work was supported by the National Natural Science Foundation of China under Grants 61463052 and 61365001, and by the Yunnan Province University Key Laboratory Construction Plan Funding, China.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.