
A scheme for edge-based multi-focus Color image fusion

Published in: Multimedia Tools and Applications

Abstract

In this paper, a novel region-based multi-focus color image fusion method is proposed, which employs the focused edges extracted from the source images to obtain a fused image with better overall focus. First, edges are extracted from the source images using two suitable edge operators (zero-crossing and Canny). A block-wise region comparison is then performed to identify the focused edges, which are morphologically dilated, after which the largest connected component is retained to remove isolated points. Any discontinuity in the detected edges is repaired by consulting the output of the Canny edge operator. The best reconstructed edge image is chosen and subsequently converted into a focused region. Finally, the fused image is constructed by selecting pixels from the source images according to a prescribed color decision map. The proposed method has been implemented and tested on a set of real 2-D multi-focus image pairs (both gray-scale and color), and it performs competitively with recent fusion methods in terms of both subjective and objective evaluation.
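The pipeline outlined in the abstract can be sketched in simplified form as follows. This is an illustrative approximation only, not the authors' implementation: it substitutes a Laplacian edge-energy focus measure for the paper's zero-crossing/Canny edge maps, operates on single-channel (gray-scale) images rather than color, and assumes an 8×8 block size.

```python
import numpy as np
from scipy import ndimage

def fuse_multifocus(img_a, img_b, block=8):
    """Simplified sketch of edge-guided multi-focus fusion.

    Edge strength is measured with a Laplacian (a stand-in for the
    paper's zero-crossing/Canny operators); blocks with stronger
    edges are assumed to be in focus, and the resulting decision
    map selects each pixel from the sharper source image.
    """
    def edge_energy(img):
        # Absolute Laplacian response as a crude focus/edge measure
        return np.abs(ndimage.laplace(img.astype(float)))

    ea, eb = edge_energy(img_a), edge_energy(img_b)
    h, w = img_a.shape
    decision = np.zeros((h, w), dtype=bool)
    # Block-wise comparison of edge energy between the two sources
    for y in range(0, h, block):
        for x in range(0, w, block):
            sl = (slice(y, y + block), slice(x, x + block))
            decision[sl] = ea[sl].sum() >= eb[sl].sum()
    # Morphological cleanup: dilate, then keep the largest
    # connected component to suppress isolated decision blocks
    decision = ndimage.binary_dilation(decision)
    labels, n = ndimage.label(decision)
    if n > 1:
        sizes = ndimage.sum(decision, labels, range(1, n + 1))
        decision = labels == (1 + np.argmax(sizes))
    # Pixel selection driven by the decision map
    return np.where(decision, img_a, img_b)
```

On a synthetic pair where each source is sharp in a complementary half of the frame, the fused output recovers the sharp content from both sources; the full method additionally reconstructs broken edges and handles color channels via the decision map.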

Figs. 1–20


Author information

Correspondence to Manali Roy.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Roy, M., Mukhopadhyay, S. A scheme for edge-based multi-focus Color image fusion. Multimed Tools Appl 79, 24089–24117 (2020). https://doi.org/10.1007/s11042-020-09116-w

