
A Novel MRI and PET Image Fusion in the NSST Domain Using YUV Color Space Based on Convolutional Neural Networks

Wireless Personal Communications

Abstract

Medical image fusion techniques combine medical images from different modalities to improve the accuracy and reliability of medical diagnosis, and they are becoming increasingly significant in a variety of clinical applications. This paper proposes a novel convolutional neural network (CNN)-based MRI and PET image fusion method in the non-subsampled shearlet transform (NSST) domain. The PET image is first converted to the YUV color space. The MRI image and the Y (luminance) component of the PET image are then fed to a CNN to produce a weight map. The weight map, the MRI image, and the Y component of the PET image are decomposed using the NSST, and the decomposed sub-bands are merged according to a local similarity-based fusion rule. Applying the inverse NSST yields the fused luminance image, which is recombined with the original U and V components to produce the final result. Objective analysis and visual quality assessment show that the results are superior to those obtained by conventional fusion algorithms.
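For readers who want to experiment with the pipeline, the sketch below mirrors the processing steps named in the abstract. It is a minimal illustration under stated assumptions, not the authors' implementation: PyWavelets' 2-D discrete wavelet transform stands in for the NSST (no widely used Python NSST package exists), a local Laplacian-energy map stands in for the CNN-generated weight map, and a plain weighted average stands in for the local similarity-based fusion rule. The inputs are assumed to be a co-registered single-channel MRI image and a color PET image of the same size:

import cv2
import numpy as np
import pywt


def activity_map(img, ksize=7):
    # Stand-in for the CNN weight map: normalized local Laplacian energy.
    lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
    energy = cv2.boxFilter(lap * lap, -1, (ksize, ksize))
    return energy / (energy.max() + 1e-12)


def fuse_bands(a, b, w):
    # Weighted average of two sub-bands; the weight map is resized to
    # match the sub-band resolution at each decomposition level.
    w = cv2.resize(w, (a.shape[1], a.shape[0]))
    return w * a + (1.0 - w) * b


def fuse_mri_pet(mri_gray, pet_bgr, wavelet="db2", levels=3):
    # Step 1: convert the PET image to YUV and split the channels.
    pet_yuv = cv2.cvtColor(pet_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    y, u, v = cv2.split(pet_yuv)
    mri = mri_gray.astype(np.float32)

    # Step 2: weight map (the paper derives this with a CNN; a local
    # energy ratio is used here purely as a placeholder).
    wa, wb = activity_map(mri), activity_map(y)
    w = wa / (wa + wb + 1e-12)

    # Step 3: decompose both inputs (2-D DWT here instead of NSST).
    ca = pywt.wavedec2(mri, wavelet, level=levels)
    cb = pywt.wavedec2(y, wavelet, level=levels)

    # Step 4: merge the sub-bands with the weight map, then invert
    # the transform to obtain the fused luminance image.
    fused = [fuse_bands(ca[0], cb[0], w)]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(fuse_bands(sa, sb, w) for sa, sb in zip(da, db)))
    fy = pywt.waverec2(fused, wavelet)[: y.shape[0], : y.shape[1]]

    # Step 5: recombine the fused luminance with the original U and V
    # components and convert back to BGR for display.
    out = np.clip(cv2.merge([fy.astype(np.float32), u, v]), 0, 255)
    return cv2.cvtColor(out.astype(np.uint8), cv2.COLOR_YUV2BGR)

Substituting a true NSST for the wavelet decomposition and a trained network for the energy ratio recovers the structure of the proposed method; note that leaving the U and V channels untouched is what preserves the PET pseudo-color in the fused result.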


Data Availability

The images used for this study were downloaded directly from the Whole Brain Atlas database (http://www.med.harvard.edu/aanlib/home.html). The authors confirm that the data supporting the findings of this study are available within the articles [7, 13] and their supplementary materials.

Code Availability

The authors used the MATLAB code publicly available in Yu Liu's GitHub repository (https://github.com/yuliu316316?tab=repositories).

References

1. Kong, W. (2022). Multimodal medical image fusion using convolutional neural network and extreme learning machine. Frontiers in Neurorobotics. https://doi.org/10.3389/fnbot.2022.1050981

2. Sebastian, J., & King, G. R. (2022). Comparative analysis and fusion of MRI and PET images based on wavelets for clinical diagnosis. International Journal of Electronics and Telecommunications, 68(4), 867–873.

3. Liu, Y., Chen, X., Hu, P., & Wang, Z. (2017). Multi-focus image fusion with a deep convolutional neural network. Information Fusion, 36, 191–207.

4. Zhang, H., Han, X., Tian, X., Jiang, J., & Ma, J. (2021). Image fusion meets deep learning: A survey and perspective. Information Fusion, 76, 323–336.

5. Li, Y., Zhao, J., Lv, Z., & Li, J. (2021). Medical image fusion method by deep learning. International Journal of Cognitive Computing in Engineering, 2, 21–29.

6. Sebastian, J., & King, G. R. (2021). Fusion of multimodality medical images: A review. In Smart Technologies, Communication and Robotics (STCR) (pp. 357–362).

7. Liu, Y., Chen, X., Cheng, J., & Peng, H. (2017). A medical image fusion method based on convolutional neural networks. In 20th International Conference on Information Fusion (FUSION 2017) (pp. 18–24).

8. Wang, K., Zheng, M., Wei, H., Qi, G., & Li, Y. (2020). Multi-modality medical image fusion using convolutional neural network and contrast pyramid. Sensors, 20(8), 1–17.

9. Piao, J., Chen, Y., & Shin, H. (2019). A new deep learning based multi-spectral image fusion method. Entropy, 21, 570.

10. Hermessi, H., Mourali, O., & Zagrouba, E. (2018). Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain. Neural Computing and Applications, 30(7), 2029–2045.

11. Kaur, M., & Singh, D. (2020). Multimodality medical image fusion technique using multiobjective differential evolution based deep neural networks. Journal of Ambient Intelligence and Humanized Computing, 12(24), 2483–2493.

12. Liu, S., Wang, M., Yin, L., Sun, X., & Zhang, Y. (2022). Two-scale multimodal medical image fusion based on structure preservation. Frontiers in Computational Neuroscience, 15, 1–14.

13. Ding, Z., Zhou, D., Nie, R., Hou, R., & Liu, Y. (2020). Brain medical image fusion based on dual-branch CNNs in NSST domain. BioMed Research International, 2020(13), 15.

14. Ouerghi, H., Mourali, O., & Zagrouba, E. (2018). Non-subsampled shearlet transform based MRI and PET brain image fusion using simplified pulse coupled neural network and weight local features in YIQ colour space. IET Image Processing, 12(10), 1873–1880.

15. Podpora, M. (2014). YUV vs RGB: Choosing a color space for human-machine interaction. Annals of Computer Science and Information Systems, 3, 29–34.

16. Haghighat, M. B. A., Aghagolzadeh, A., & Seyedarabi, H. (2011). A non-reference image fusion metric based on mutual information. Computers and Electrical Engineering, 37(5), 744–756.

17. Piella, G., & Heijmans, H. (2003). A new quality metric for image fusion. In International Conference on Image Processing (pp. 173–176).

18. Xydeas, C. S., & Petrovic, V. (2000). Objective image fusion performance measure. Electronics Letters, 36(4), 308–309.

19. Han, Y., Cai, Y., Cao, Y., & Xu, X. (2013). A new image fusion performance metric based on visual information fidelity. Information Fusion, 14(2), 127–135.

20. Jagalingam, P., & Hegde, A. V. (2015). A review of quality metrics for fused image. Aquatic Procedia, 4, 133–142.

21. Li, S., Kang, X., & Hu, J. (2013). Image fusion with guided filtering. IEEE Transactions on Image Processing, 22(7), 2864–2875.

22. Liu, X., Mei, W., & Du, H. (2018). Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform. Biomedical Signal Processing and Control, 40(6), 343–350.


Funding

The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Contributions

All authors contributed to the study conception, design, material preparation, data collection, and analysis. The first draft of the manuscript was written by JS, and GK commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jinu Sebastian.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sebastian, J., King, G.R.G. A Novel MRI and PET Image Fusion in the NSST Domain Using YUV Color Space Based on Convolutional Neural Networks. Wireless Pers Commun 131, 2295–2309 (2023). https://doi.org/10.1007/s11277-023-10542-w

