Multi-focus Image Fusion with PCA Filters of PCANet

  • Xu Song
  • Xiao-Jun Wu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11377)


Training a deep learning model is typically time-consuming and complex. In this paper, therefore, a very simple deep learning model called PCANet is used to extract image features from multi-focus images. First, we train a two-stage PCANet on ImageNet to obtain the PCA filters that are used for feature extraction. From the feature maps of the first stage of PCANet, we generate activity level maps of the source images using the nuclear norm. A decision map is then obtained through a series of post-processing operations on the activity level maps. Finally, the fused image is produced by a weighted fusion rule. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion performance in terms of both objective assessment and visual quality.
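The pipeline in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the PCANet feature extraction stage is assumed to have already produced per-image feature maps, the local patch size (here 8) is a hypothetical choice, and the paper's decision-map post-processing is replaced by a raw soft weight derived directly from the nuclear-norm activity maps.

```python
import numpy as np

def activity_map(features, patch=8):
    """Activity level via the nuclear norm (sum of singular values)
    of local feature patches. `features` has shape (H, W, C), e.g.
    the first-stage PCANet feature maps stacked along the channel axis."""
    H, W, _ = features.shape
    r = patch // 2
    pad = np.pad(features, ((r, r), (r, r), (0, 0)), mode="reflect")
    act = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # Flatten the local patch into a (patch*patch, C) matrix
            # and take its nuclear norm as the activity measure.
            block = pad[i:i + patch, j:j + patch].reshape(patch * patch, -1)
            act[i, j] = np.linalg.norm(block, ord="nuc")
    return act

def fuse(img_a, img_b, feat_a, feat_b):
    """Weighted fusion driven by nuclear-norm activity maps
    (the paper additionally post-processes these into a decision map)."""
    a1, a2 = activity_map(feat_a), activity_map(feat_b)
    w = a1 / (a1 + a2 + 1e-12)          # soft weight in [0, 1]
    return w * img_a + (1.0 - w) * img_b
```

Because each fused pixel is a convex combination of the two source pixels, it always lies between the corresponding source values; the activity maps simply steer the weight toward the in-focus image.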


Keywords: Multi-focus image fusion · PCA filters · Nuclear norm



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Xu Song¹
  • Xiao-Jun Wu¹

  1. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi, China
