
Multi-focus image fusion with the all convolutional neural network

Published in Optoelectronics Letters

Abstract

A decision map contains complete and clear information about the image to be fused, which is crucial to various image fusion problems, especially multi-focus image fusion. However, obtaining an accurate decision map, which is essential for a satisfactory fusion result, is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to produce a state-of-the-art decision map. The main idea is to replace the max-pooling layers of the CNN with convolution layers, propagate the residuals backwards by gradient descent, and update the training parameters of the CNN layer by layer. On this basis, we propose a new all convolutional network (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and leads to high-quality fusion results. Experimental results validate that the proposed algorithm achieves state-of-the-art fusion performance in both qualitative and quantitative evaluations.
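The core architectural change described above, replacing a max-pooling layer with a strided convolution, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the fixed 2×2 averaging kernel stands in for weights that the ACNN would actually learn by gradient descent.

```python
import numpy as np

def max_pool_2x2(x):
    """Standard 2x2 max-pooling with stride 2 on a 2-D feature map."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def strided_conv_2x2(x, k):
    """2x2 convolution with stride 2: the learnable replacement for pooling.

    Produces the same output resolution as 2x2 max-pooling, but the
    downsampling weights k are trainable instead of a fixed max operation.
    """
    h, w = x.shape
    out = np.empty((h // 2, w // 2))
    for i in range(h // 2):
        for j in range(w // 2):
            out[i, j] = np.sum(x[2 * i:2 * i + 2, 2 * j:2 * j + 2] * k)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
# A fixed averaging kernel; in the ACNN these weights would be learned.
k = np.full((2, 2), 0.25)
print(max_pool_2x2(x))       # [[ 5.  7.] [13. 15.]]
print(strided_conv_2x2(x, k))  # [[ 2.5  4.5] [10.5 12.5]]
```

Both operations halve the spatial resolution, which is why the substitution leaves the rest of the network architecture unchanged while making the entire forward pass differentiable and trainable end to end.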



Author information

Corresponding author

Correspondence to Chao-ben Du (杜超本).

Additional information

This work has been supported by the National Natural Science Foundation of China (No.61174193).


Cite this article

Du, Cb., Gao, Ss. Multi-focus image fusion with the all convolutional neural network. Optoelectron. Lett. 14, 71–75 (2018). https://doi.org/10.1007/s11801-018-7207-x
