
A novel improved deep convolutional neural network model for medical image fusion


Abstract

This paper proposes a novel fusion scheme for multi-modal medical images that exploits both multi-scale transformation and a deep convolutional neural network. First, the source images are decomposed into several sub-images by a Gauss-Laplace filter and a Gaussian filter in the first layer of the network. Then, the He initialization method [18] is used to initialize the convolution kernels of the remaining layers, a basic unit is constructed, and the back-propagation algorithm is used to train it; multiple trained basic units are stacked, following the idea of stacked auto-encoders (SAE), to obtain a deep stacked neural network. The proposed network decomposes each input image into its high-frequency and low-frequency sub-images, the two pairs of sub-images are fused according to our fusion rule, and the fused sub-images are fed back into the last layer of the network to produce the final fused image. The performance of the proposed fusion method is evaluated through several experiments on different medical image datasets. Experimental results demonstrate that the proposed method not only produces better results by successfully fusing the different images, but also improves the various quantitative metrics compared with existing methods. In addition, the proposed improved CNN method runs much faster than the comparison algorithms that achieve good fusion quality.
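To make the steps in the abstract concrete, below is a minimal Python sketch of the first-layer decomposition (Gaussian and Gauss-Laplace filters), He initialization for the learned convolution kernels, and a simple fusion step. This is not the authors' released code: the averaging rule for low-frequency sub-images and the max-absolute rule for high-frequency sub-images are common baselines standing in for the paper's own fusion rule (not specified in the abstract), the additive recombination stands in for feeding the fused sub-images back through the last network layer, and all function names are hypothetical.

```python
# Sketch under the assumptions stated above; requires numpy and scipy.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace


def first_layer_decompose(img, sigma=1.0):
    """First network layer: split an image into a low-frequency sub-image
    (Gaussian filter) and a high-frequency sub-image (Gauss-Laplace filter)."""
    img = img.astype(np.float64)
    low = gaussian_filter(img, sigma=sigma)
    high = gaussian_laplace(img, sigma=sigma)
    return low, high


def he_init(shape, rng=None):
    """He initialization for a conv kernel of shape
    (out_channels, in_channels, kH, kW): zero-mean Gaussian samples
    scaled by sqrt(2 / fan_in)."""
    if rng is None:
        rng = np.random.default_rng()
    fan_in = int(np.prod(shape[1:]))
    return rng.standard_normal(shape) * np.sqrt(2.0 / fan_in)


def fuse(img_a, img_b, sigma=1.0):
    """Fuse two co-registered images: average the low-frequency parts,
    keep the larger-magnitude high-frequency response pixel-wise, then
    recombine additively (a stand-in for the network's last layer)."""
    low_a, high_a = first_layer_decompose(img_a, sigma)
    low_b, high_b = first_layer_decompose(img_b, sigma)
    low_f = 0.5 * (low_a + low_b)                        # low-frequency rule
    high_f = np.where(np.abs(high_a) >= np.abs(high_b),  # high-frequency rule
                      high_a, high_b)
    return low_f + high_f
```

Training of the stacked basic units themselves (back-propagation on each unit, then SAE-style stacking) is omitted here; in the full method the learned layers refine the sub-images that the fixed filter pair above only approximates, and the inputs are assumed to be registered and intensity-normalized beforehand.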




References

  1. Li, S.-T., Kang, X.-D., Fang, L., et al.: Pixel-level image fusion: a survey of the state of the art. Inf. Fusion 33, 100–112 (2017)


  2. Li, H., Manjunath, B.S., Mitra, S.K.: Multisensor image fusion using the wavelet transform. Graph. Models Image Process. 57(3), 235–245 (1995)


  3. Lewis, J.J., Callaghan, R.J., Nikolov, S.G., et al.: Pixel-and region-based image fusion with complex wavelets. Inf. Fusion 8(2), 119–130 (2007)


  4. Liu, Y., Liu, S., Wang, Z.: A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 24, 147–164 (2015)


  5. Easley, G., Labate, D., Lim, W.Q.: Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 25(1), 25–46 (2008)


  6. Li, S., Yang, B., Hu, J.: Performance comparison of different multi-resolution transforms for image fusion. Inf. Fusion 12(2), 74–84 (2011)


  7. Guo, Y.-M., Liu, Y., Oerlemans, A., et al.: Deep learning for visual understanding: a review. Neurocomputing 187, 27–48 (2016)


  8. Li, H., Liu, F., Yang, S.-Y., et al.: Remote sensing image fusion based on deep support value learning networks. Chin. J. Comput. 39(8), 1583–1596 (2016)


  9. Fan, L., Ze-Hua, C., Jing, C.: A new multi-focus image fusion method based on deep neural network model. J. Shandong Univ. (Eng. Sci.) 46(3), 7–13 (2016)


  10. Ng, W.W.Y., Zeng, G., Zhang, J., et al.: Dual autoencoders features for imbalance classification problem. Pattern Recognit. 60, 875–889 (2016)


  11. Gehring, J., Miao, Y., Metze, F., et al.: Extracting deep bottleneck features using stacked auto-encoders. In: Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Vancouver, Canada, pp. 3377–3381 (2013)

  12. Zhao, Z., Jiao, L., Zhao, J., et al.: Discriminant deep belief network for high-resolution SAR image classification. Pattern Recognit. 61, 686–701 (2017)


  13. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Proceedings of the Advances in Neural Information Processing Systems. Nevada, USA, pp. 1097–1105 (2012)

  14. Zabalza, J., Ren, J., Zheng, J., et al.: Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging. Neurocomputing 185, 1–10 (2016)


  15. Jiao, L.-C., Yang, S.-Y., Liu, F., et al.: Seventy years beyond neural networks: retrospect and prospect. Chin. J. Comput. 39(8), 1697–1716 (2016)


  16. Kingsbury, N.: A dual-tree complex wavelet transform with improved orthogonality and symmetry properties. In: Proceedings of the International Conference on Image Processing. Vancouver, Canada, vol. 2, pp. 375–378 (2000)

  17. Do, M.N., Vetterli, M.: The contourlet transform: an efficient directional multiresolution image representation. IEEE Trans. Image Process. 14(12), 2091–2106 (2005)


  18. He, K., Zhang, X., Ren, S., et al.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE International Conference on Computer Vision. Santiago, Chile, pp. 1026–1034 (2015)

  19. Russakovsky, O., Deng, J., Su, H., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–262 (2015)


  20. Bengio, Y.: Practical recommendations for gradient-based training of deep architectures. In: Proceedings of the Neural Networks: Tricks of the Trade. Berlin, Germany, pp. 437–478 (2012)

  21. Shi, J., Zhou, S., Liu, X., et al.: Stacked deep polynomial network based representation learning for tumor classification with small ultrasound image dataset. Neurocomputing 194, 87–94 (2016)


  22. He, K.-M., Zhang, X.-Y., Ren, S., et al.: Deep residual learning for image recognition. arXiv:1512.03385 (2015)

  23. Kong, W., Wang, B., Lei, Y.: Technique for infrared and visible image fusion based on non-subsampled shearlet transform and spiking cortical model. Infrared Phys. Technol. 71, 87–98 (2015)


  24. Smith, E.P.G., Pham, L.T., Venzor, G.M., et al.: HgCdTe focal plane arrays for dual-color mid- and long-wavelength infrared detection. J. Electron. Mater. 33(6), 509–516 (2004)


  25. Jagalingam, P., Hegde, A.V.: A review of quality metrics for fused image. Aquat. Proc. 4, 133–142 (2015)



Author information


Corresponding author

Correspondence to Kai-jian Xia.


About this article


Cite this article

Xia, K.-J., Yin, H.-S. & Wang, J.-Q. A novel improved deep convolutional neural network model for medical image fusion. Cluster Comput. 22(Suppl 1), 1515–1527 (2019). https://doi.org/10.1007/s10586-018-2026-1

