Blind inpainting using the fully convolutional neural network

  • Original Article
  • Published in The Visual Computer

Abstract

Most existing inpainting techniques require knowing beforehand where the damaged pixels are, i.e., they are non-blind inpainting methods. However, in many applications such information may not be readily available. In this paper, we propose a novel blind inpainting method based on a fully convolutional neural network, which we term the blind inpainting convolutional neural network (BICNN). It cascades three convolutional layers to directly learn an end-to-end mapping from a pre-acquired dataset of corrupted/ground-truth subimage pairs. Stochastic gradient descent with standard backpropagation is used to train the BICNN. Once trained, the BICNN can automatically identify and remove corrupting patterns from a corrupted image without knowing the specific damaged regions. The learned BICNN takes a corrupted image of any size as input and directly produces a clean output in a single forward pass. Experimental results indicate that the proposed method achieves better inpainting performance than existing inpainting methods for various corrupting patterns.
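Because the BICNN is fully convolutional, a trained network can be applied to an input image of any size, with each layer simply sliding its filters over the feature maps of the previous one. The following NumPy sketch illustrates this forward pass; it is not the authors' implementation, and the kernel sizes (9, 1, 5) and filter counts used here are assumptions for illustration only:

```python
import numpy as np

def conv2d(img, kernels, bias):
    """Valid 2D convolution. img: (H, W, C_in); kernels: (k, k, C_in, C_out)."""
    k = kernels.shape[0]
    h, w, _ = img.shape
    out = np.zeros((h - k + 1, w - k + 1, kernels.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + k, j:j + k, :]
            # Contract the patch against every filter at once.
            out[i, j, :] = np.tensordot(patch, kernels,
                                        axes=([0, 1, 2], [0, 1, 2])) + bias
    return out

def bicnn_forward(img, layers):
    """Cascade the convolutional layers; ReLU on all but the last layer."""
    x = img
    for idx, (w, b) in enumerate(layers):
        x = conv2d(x, w, b)
        if idx < len(layers) - 1:
            x = np.maximum(x, 0.0)  # rectified linear activation
    return x

# Toy layer shapes (assumed, not from the paper): 9x9, 1x1, 5x5 kernels.
rng = np.random.default_rng(0)
layers = [
    (0.01 * rng.standard_normal((9, 9, 1, 8)), np.zeros(8)),
    (0.01 * rng.standard_normal((1, 1, 8, 4)), np.zeros(4)),
    (0.01 * rng.standard_normal((5, 5, 4, 1)), np.zeros(1)),
]
corrupted = rng.standard_normal((64, 64, 1))   # grayscale patch; any size works
restored = bicnn_forward(corrupted, layers)
print(restored.shape)
```

With valid (unpadded) convolutions the spatial size shrinks by k − 1 per layer, so a 64 × 64 input yields a 52 × 52 output here; in practice zero-padding can be used to keep input and output sizes equal.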


References

  1. Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH’00), pp. 417–424 (2000)

  2. Guillemot, C., Le Meur, O.: Image inpainting: overview and recent advances. IEEE Signal Process. Mag. 31(1), 127–144 (2014)

  3. Criminisi, A., Perez, P., Toyama, K.: Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Image Process. 13(9), 1200–1212 (2004)

  4. Drori, I., Cohen-Or, D., Yeshurun, H.: Fragment-based image completion. ACM Trans. Graph. 22(3), 303–312 (2003)

  5. He, K., Sun, J.: Statistics of patch offsets for image completion. Proc. Eur. Conf. Comput. Vis. (ECCV) 7773, 16–29 (2012)

  6. Le Meur, O., Ebdelli, M., Guillemot, C.: Hierarchical super-resolution-based inpainting. IEEE Trans. Image Process. 22(10), 3779–3790 (2013)

  7. Chan, T.F., Shen, J.: Non-texture inpainting by curvature-driven diffusions (CDD). J. Vis. Commun. Image Represent. 12(4), 436–449 (2001)

  8. Masnou, S., Morel, J.-M.: Level lines based disocclusion. Int. Conf. Image Process. 3, 259–263 (1998)

  9. Bertalmio, M., Bertozzi, A.L., Sapiro, G.: Navier–Stokes, fluid dynamics, and image and video inpainting. IEEE Conf. Comput. Vis. Pattern Recognit. 1, 355–362 (2001)

  10. Xu, Z., Sun, J.: Image inpainting by patch propagation using patch sparsity. IEEE Trans. Image Process. 19(5), 1153–1165 (2010)

  11. Mairal, J., Elad, M., Sapiro, G.: Sparse representation for color image restoration. IEEE Trans. Image Process. 17(1), 53–69 (2008)

  12. Fadili, M.J., Starck, J.L., Murtagh, F.: Inpainting and zooming using sparse representations. Comput. J. 52(1), 64–79 (2009)

  13. Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 208–220 (2013)

  14. Jain, V., Seung, H.S.: Natural image denoising with convolutional networks. In: 22nd Annual Conference on Neural Information Processing Systems, pp. 769–776 (2008)

  15. Burger, H.C., Schuler, C.J., Harmeling, S.: Image denoising: can plain neural networks compete with BM3D?. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4321–4328 (2012)

  16. Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. ECCV 8692, 184–199 (2014)

  17. Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks, (2015). arXiv:1501.00092

  18. Cui, Z., Chang, H., Shan, S., Zhong, B., Chen, X.: Deep network cascade for image super-resolution. In: ECCV, pp. 49–64 (2014)

  19. Schuler, C.J., Burger, H.C., Harmeling, S., Schölkopf, B.: A machine learning approach for non-blind image deconvolution. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1067–1074 (2013)

  20. Köhler, R., Schuler, C., Schölkopf, B., Harmeling, S.: Mask-specific inpainting with deep neural networks. Lect. Note. Comput. Sci. 8753, 523–534 (2014)

  21. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  22. LeCun, Y.A., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989)

  23. Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: International Conference on Machine Learning, pp. 807–814 (2010)

  24. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: International Conference on Artificial Intelligence and Statistics (AISTATS’10), pp. 249–256 (2010)

  25. Bouvrie, J.: Notes on convolutional neural networks. MIT CBCL Tech Report, pp. 38–44 (2006)

  26. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding (2014). arXiv:1408.5093

  27. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: CVPR, pp. 248–255 (2009)

  28. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: 8th International Conference on Computer Vision, ICCV, vol. 2, pp. 416–423 (2001)

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61001179, 61372173, 61471132 and 61201393) and the Guangdong Higher Education Engineering Technology Research Center (No. 501130144). We also thank Rolf Köhler for generously providing many materials related to [20].

Author information

Correspondence to Nian Cai.

About this article

Cite this article

Cai, N., Su, Z., Lin, Z. et al. Blind inpainting using the fully convolutional neural network. Vis Comput 33, 249–261 (2017). https://doi.org/10.1007/s00371-015-1190-z
