Abstract
Medical image fusion techniques combine medical images from different modalities to enhance the accuracy and reliability of medical diagnoses, and they are becoming increasingly significant in a variety of clinical applications. A novel CNN-based MRI and PET image fusion method in the NSST domain is proposed in this research. The PET image is first converted to the YUV color space. The MRI image and the Y component of the PET image are then fed into a CNN to produce a weight map. The weight map, the MRI image, and the Y component of the PET image are decomposed using NSST, and the decomposed bands are merged according to local similarity-based fusion criteria. Applying the inverse NSST produces the fused luminance image, which is recombined with the original U and V components to obtain the final result. Objective analysis and visual quality assessment demonstrate that the results are superior to those obtained by conventional fusion algorithms.
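As a rough illustration of the luminance-channel workflow described above, the sketch below converts a PET image to YUV (assuming BT.601 conversion coefficients, which the abstract does not specify), blends its Y channel with the MRI under a stand-in weight map, and recombines the untouched U and V components. The CNN-generated weight map and the NSST decomposition with band-wise merging are deliberately replaced here by a plain pixel-wise weighted blend, so this is a structural sketch of the pipeline, not the paper's method.

```python
import numpy as np

# Assumed BT.601 RGB -> YUV matrix; the paper does not state the exact coefficients.
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YUV."""
    return rgb @ RGB2YUV.T

def yuv_to_rgb(yuv):
    """Inverse conversion from YUV back to RGB."""
    return yuv @ np.linalg.inv(RGB2YUV).T

def fuse(pet_rgb, mri, weight_map):
    """Fuse a colour PET image with a grayscale MRI of the same size.

    `weight_map` stands in for the CNN output; the NSST decomposition and
    local-similarity merging of the paper are replaced by a direct weighted
    blend of the Y channel, purely for illustration.
    """
    yuv = rgb_to_yuv(pet_rgb)
    # Blend only the luminance; U and V chrominance are kept from the PET image.
    yuv[..., 0] = weight_map * yuv[..., 0] + (1.0 - weight_map) * mri
    return yuv_to_rgb(yuv)
```

With a weight map of all ones the output reduces to the original PET image, which is a convenient sanity check; the real method would produce the weight map with the trained CNN and carry out the blend band by band in the NSST domain.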
Data Availability
The images used for this study were downloaded directly from the Whole Brain Atlas database (http://www.med.harvard.edu/aanlib/home.html). The authors confirm that the data supporting the findings of this study are available within the articles [7, 13] and their supplementary materials.
Code Availability
The authors have utilized the MATLAB codes publicly available in Yu Liu’s GitHub repository (https://github.com/yuliu316316?tab=repositories).
References
Kong, W. (2022). Multimodal medical image fusion using convolutional neural network and extreme learning machine. Frontiers in Neurorobotics. https://doi.org/10.3389/fnbot.2022.1050981
Sebastian, J., & King, G. R. (2022). Comparative analysis and fusion of MRI and PET images based on wavelets for clinical diagnosis. International Journal of Electronics and Telecommunication, 68(4), 867–873.
Liu, Y., Chen, X., Hu, P., & Wang, Z. (2017). Multi-focus image fusion with a deep convolutional neural network. Information Fusion, 36, 191–207.
Zhang, H., Han, X., Tian, X., Jiang, J., & Ma, J. (2021). Image fusion meets deep learning: A survey and perspective. Information Fusion, 76, 323–336.
Li, Y., Zhao, J., Lv, Z., & Li, J. (2021). Medical image fusion method by deep learning. International Journal of Cognitive Computing in Engineering, 2, 21–29.
Sebastian, J., & King, G. R. (2021). Fusion of multimodality medical images - a review. In Smart technologies, communication and robotics (STCR), pp. 357–362.
Liu, Y., Chen, X., Cheng, J., & Peng, H. (2017). A medical image fusion method based on convolutional neural networks. In 20th international conference on information fusion (FUSION 2017), pp. 18–24.
Wang, K., Zheng, M., Wei, H., Qi, G., & Li, Y. (2020). Multi-modality medical image fusion using convolutional neural network and contrast pyramid. Sensors (Switzerland), 20(8), 1–17.
Piao, J., Chen, Y., & Shin, H. (2019). A new deep learning based multi-spectral image fusion method. Entropy, 21, 570.
Hermessi, H., Mourali, O., & Zagrouba, E. (2018). Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain. Neural Computing and Applications, 30(7), 2029–2045.
Kaur, M., & Singh, D. (2020). Multimodality medical image fusion technique using multiobjective differential evolution based deep neural networks. Journal of Ambient Intelligence and Humanized Computing, 12(24), 2483–2493.
Liu, S., Wang, M., Yin, L., Sun, X., & Zhang, Y. (2022). Two-scale multimodal medical image fusion based on structure preservation. Frontiers in Computational Neuroscience, 15, 1–14.
Ding, Z., Zhou, D., Nie, R., Hou, R., & Liu, Y. (2020). Brain medical image fusion based on dual-branch CNNs in NSST domain. BioMed Research International, 2020(13), 15.
Ouerghi, H., Mourali, O., & Zagrouba, E. (2018). Non-subsampled shearlet transform based MRI and PET brain image fusion using simplified pulse coupled neural network and weight local features in YIQ colour space. IET Image Processing, 12(10), 1873–1880.
Podpora, M. (2014). YUV vs RGB - choosing a color space for human-machine interaction. Annals of Computer Science and Information Systems, 3, 29–34.
Haghighat, M. B. A., Aghagolzadeh, A., & Seyedarabi, H. (2011). A non-reference image fusion metric based on mutual information. Computers and Electrical Engineering, 37(5), 744–756.
Piella, G., & Heijmans, H. (2003). A new quality metric for image fusion. In International conference on image processing, pp. 173–176.
Xydeas, C. S., & Petrovic, V. (2000). Objective image fusion performance measure. Electronics Letters, 36(4), 308–309.
Han, Y., Cai, Y., Cao, Y., & Xu, X. (2013). A new image fusion performance metric based on visual information fidelity. Information Fusion, 14(2), 127–135.
Jagalingam, P., & Hegde, A. V. (2015). A review of quality metrics for fused image. Aquatic Procedia, 4, 133–142.
Li, S., Kang, X., & Hu, J. (2013). Image fusion with guided filtering. IEEE Transactions on Image Processing, 22(7), 2864–2875.
Liu, X., Mei, W., & Du, H. (2018). Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform. Biomedical Signal Processing and Control, 40, 343–350.
Funding
The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.
Author information
Authors and Affiliations
Contributions
All authors contributed to the study conception, design, material preparation, data collection, and analysis. The first draft of the manuscript was written by JS, and GK commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Ethical Approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Sebastian, J., King, G.R.G. A Novel MRI and PET Image Fusion in the NSST Domain Using YUV Color Space Based on Convolutional Neural Networks. Wireless Pers Commun 131, 2295–2309 (2023). https://doi.org/10.1007/s11277-023-10542-w