A new focus evaluation operator based on max–min filter and its application in high quality multi-focus image fusion

  • Shuaiqi Liu
  • Yucong Lu
  • Jie Wang
  • Shaohai Hu
  • Jie Zhao
  • Zhihui Zhu


Multi-focus image fusion plays an important role in image recognition and analysis. However, current focus evaluation operators are complex and inefficient. In this paper, a new focus evaluation operator based on the max–min filter is proposed. In the new focus measure, the max–min filter is used together with an average filter and a median filter (MMAM) to evaluate the focus degree of the source images. This evaluation algorithm measures the sharpness of different image regions well, and the selected clear regions are more useful for human visual or machine perception. Experiments show that MMAM outperforms the sum-of-modified-Laplacian in most cases. MMAM is then used to fuse multi-focus images by combining structure-driven fused regions with the depth information of blurred images. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art fusion algorithms in terms of image quality and objective fusion criteria. This paper first proposes the concept and computational procedure of MMAM, which provides a new research direction and innovative ideas for filter-based multi-focus image fusion; moreover, MMAM can be embedded into state-of-the-art fusion algorithms to achieve high-quality multi-focus image fusion.
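The abstract describes MMAM only at a high level: a max–min (local range) filter, assisted by average and median filters, scores the focus degree of each region, and the sharper source is selected. A minimal sketch of such a measure is shown below; the exact pipeline, window sizes, and per-pixel selection rule are assumptions for illustration, not the authors' published algorithm.

```python
# Hypothetical sketch of a max-min-filter focus measure in the spirit
# of MMAM. Window sizes and the filter ordering are assumptions.
import numpy as np
from scipy import ndimage


def mmam_focus(img, win=5):
    """Per-pixel focus score: local max minus local min (local range),
    denoised with a median filter and pooled with an average filter."""
    img = img.astype(np.float64)
    # The local range responds strongly in sharp, textured regions
    # and weakly in defocused (smoothed) regions.
    rng = ndimage.maximum_filter(img, size=win) - ndimage.minimum_filter(img, size=win)
    rng = ndimage.median_filter(rng, size=3)       # suppress impulsive noise
    return ndimage.uniform_filter(rng, size=win)   # aggregate over a neighborhood


def fuse(img_a, img_b, win=5):
    """Naive fusion rule: pick, per pixel, the source with the higher score."""
    mask = mmam_focus(img_a, win) >= mmam_focus(img_b, win)
    return np.where(mask, img_a, img_b)
```

On a synthetic pair (a textured image and a blurred copy of it), the score is consistently higher on the sharp source, which is the behavior a focus measure needs before any region-level refinement such as the paper's structure-driven regions.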


Image fusion · Multi-focus images · Max–min filter · Structure-driven fused regions · Depth information



The authors are grateful to Dr. Qu Xiaobo for sharing the SSID code and the multi-focus source images used in this paper. This work was supported by the High-Performance Computing Center of Hebei University. Moreover, our work was supported in part by the Natural Science Foundation of China under Grants 61401308 and 61572063, the Natural Science Foundation of Hebei Province under Grants F2018210148 and F2016201142, the Science Research Project of Hebei Province under Grant QN2016085, the Natural Science Foundation of Hebei University under Grant 2014-303, and the Opening Foundation of the Machine Vision Engineering Research Center of Hebei Province under Grant 2018HBMV02. We also thank the Editor and Reviewers for their efforts in processing this submission, and we are particularly grateful to the reviewers for their constructive comments and suggestions, which helped us improve the quality of this paper.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. College of Electronic and Information Engineering, Hebei University, Baoding, China
  2. Machine Vision Engineering Research Center of Hebei Province, Baoding, China
  3. College of Computer and Information, Beijing Jiaotong University, Beijing, China
  4. Whiting School of Engineering, The Johns Hopkins University, Baltimore, USA
