A fusion algorithm for medical structural and functional images based on adaptive image decomposition
Multimodal medical image fusion is widely used in clinical applications because of its ability to enrich the information conveyed by medical images. In this paper, a novel fusion algorithm dedicated to the fusion of medical structural and functional images is proposed. The algorithm first separates the texture component of the structural image from its smooth component. It then segments the source images into function-informative and function-uninformative regions by judging whether each pixel of the functional image carries informative color (black pixels are treated as meaningless). A smooth version of the fused image is obtained by filling the function-informative region with color from the functional image and filling the function-uninformative region with the smooth component of the structural image. Finally, this smooth fused image is combined with the texture extracted from the structural image to produce the final fused image. Attractive features of the algorithm include its preservation of both color and texture and its low time consumption. Experimental results demonstrate that the proposed method achieves state-of-the-art performance for medical structural and functional image fusion.
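The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it substitutes a plain mean filter for the paper's adaptive decomposition, and the threshold `tau`, the function names, and the image value conventions are all assumptions made for the example.

```python
import numpy as np

def box_smooth(img, k=5):
    # Mean filter as a simple stand-in for the paper's adaptive decomposition.
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(structural, functional_rgb, tau=0.05):
    """Fuse a grayscale structural image (H, W) in [0, 1] with a
    functional RGB image (H, W, 3) in [0, 1]."""
    # 1. Decompose the structural image into smooth + texture components.
    smooth = box_smooth(structural)
    texture = structural - smooth
    # 2. Function-informative mask: pixels whose color intensity exceeds tau
    #    (near-black functional pixels are treated as carrying no information).
    informative = functional_rgb.max(axis=2) > tau
    # 3. Smooth version of the fused image: functional color where informative,
    #    smooth structural intensity (replicated to 3 channels) elsewhere.
    fused = np.where(informative[..., None],
                     functional_rgb,
                     smooth[..., None].repeat(3, axis=2))
    # 4. Re-inject the structural texture into every channel and clip to range.
    return np.clip(fused + texture[..., None], 0.0, 1.0)
```

The mask step keeps functional color intact in informative regions (avoiding color distortion), while the texture re-injection restores structural detail everywhere, which is the combination of properties the abstract claims.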
Keywords: Medical image fusion · Image decomposition · Color distortion · Texture preservation
This work was supported by the National Natural Science Foundation of China (Grant Nos. 61672259, 61602203, 61876070, 61801190), the Outstanding Young Talent Foundation of Jilin Province (Grant No. 20180520029JH), and the China Postdoctoral Science Foundation (Grant No. 2017M611323). We would also like to thank http://www.med.harvard.edu/aanlib/home.html for providing the source medical images.