Abstract
In this paper, a novel region-based multi-focus color image fusion method is proposed, which uses the focused edges extracted from the source images to obtain a fused image with improved focus. First, edges are obtained from the source images using two suitable edge operators (zero-crossing and Canny). A block-wise region comparison then extracts the focused edges, which are morphologically dilated, after which the largest connected component is selected to remove isolated points. Any discontinuity in the detected edges is repaired by consulting the output of the Canny edge operator. The best reconstructed edge image is chosen and subsequently converted into a focused region. Finally, the fused image is constructed by selecting pixels from the source images according to a prescribed color decision map. The proposed method has been implemented and tested on a set of real 2-D multi-focus image pairs (both gray-scale and color). The algorithm performs competitively against recent fusion methods in terms of both subjective and objective evaluation.
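The pipeline in the abstract (edge detection, dilation, largest-component selection, and decision-map fusion) can be sketched as follows. This is a minimal illustration, not the authors' implementation: a Laplacian-of-Gaussian zero-crossing detector stands in for the paper's zero-cross operator, the Canny consultation and block-wise comparison steps are omitted, and the function names (`edge_map`, `focused_region`, `fuse`) are hypothetical.

```python
import numpy as np
from scipy import ndimage

def edge_map(img, sigma=1.0):
    """Laplacian-of-Gaussian zero-crossing edges (stand-in for the
    zero-cross operator mentioned in the abstract)."""
    log = ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
    zc = np.zeros(log.shape, dtype=bool)
    # A pixel is an edge where the LoG changes sign against a neighbour.
    zc[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0
    zc[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0
    return zc

def focused_region(edges, dilate_iter=3):
    """Dilate the edge map, keep the largest connected component to
    discard isolated points, and fill it into a solid region."""
    mask = ndimage.binary_dilation(edges, iterations=dilate_iter)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    largest = np.argmax(sizes) + 1
    return ndimage.binary_fill_holes(labels == largest)

def fuse(src_a, src_b, dilate_iter=3):
    """Select pixels from src_a where its focused region says it is
    sharp, else from src_b (a simplified binary decision map)."""
    region_a = focused_region(edge_map(src_a), dilate_iter)
    fused = np.where(region_a, src_a, src_b)
    return fused, region_a
```

A richer implementation would compute focused regions for both source images, resolve conflicts block-wise, and repair edge discontinuities using the Canny output, as the abstract describes.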
Cite this article
Roy, M., Mukhopadhyay, S. A scheme for edge-based multi-focus Color image fusion. Multimed Tools Appl 79, 24089–24117 (2020). https://doi.org/10.1007/s11042-020-09116-w