
The Visual Computer, Volume 33, Issue 2, pp. 249–261

Blind inpainting using the fully convolutional neural network

  • Nian Cai (Email author)
  • Zhenghang Su
  • Zhineng Lin
  • Han Wang
  • Zhijing Yang
  • Bingo Wing-Kuen Ling
Original Article

Abstract

Most existing inpainting techniques require knowing beforehand where the damaged pixels are; that is, they are non-blind inpainting methods. However, in many applications, such information may not be readily available. In this paper, we propose a novel blind inpainting method based on a fully convolutional neural network, which we term the blind inpainting convolutional neural network (BICNN). It cascades three convolutional layers to directly learn an end-to-end mapping from corrupted subimages to their ground-truth counterparts on a pre-acquired training dataset. Stochastic gradient descent with standard backpropagation is used to train the BICNN. Once trained, the BICNN can automatically identify and remove corrupting patterns from a corrupted image without knowing the specific damaged regions. The learned BICNN takes a corrupted image of any size as input and directly produces a clean output in a single forward pass. Experimental results indicate that the proposed method achieves better inpainting performance than existing inpainting methods for various corrupting patterns.
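As a concrete illustration of the architecture the abstract describes, the following is a minimal sketch of a three-layer fully convolutional network and one SGD training step, written in PyTorch for readability (the authors train with Caffe [26]). The channel widths, kernel sizes, and MSE objective are illustrative assumptions, not hyperparameters reported in the abstract.

```python
# Sketch of a BICNN-style blind-inpainting network: three cascaded
# convolutional layers mapping a corrupted image directly to a clean one.
# Widths and kernel sizes below are assumptions for illustration only.
import torch
import torch.nn as nn

class BICNNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),                       # ReLU activation [23]
            nn.Conv2d(64, 32, kernel_size=1),            # nonlinear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # clean-image reconstruction
        )

    def forward(self, x):
        # Fully convolutional: x may be a corrupted image of any size.
        return self.net(x)

def train_step(model, optimizer, corrupted, clean):
    """One SGD step on a batch of corrupted/ground-truth subimage pairs."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(corrupted), clean)
    loss.backward()  # standard backpropagation
    optimizer.step()
    return loss.item()

model = BICNNSketch()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
# Dummy 33x33 grayscale subimage pairs stand in for the training set.
corrupted, clean = torch.rand(8, 1, 33, 33), torch.rand(8, 1, 33, 33)
print(train_step(model, optimizer, corrupted, clean))
# After training, a full-size corrupted image is cleaned in one forward pass:
restored = model(torch.rand(1, 1, 256, 256))
```

Because every layer is convolutional, the learned weights apply unchanged to inputs of any size, which is what allows the network to be trained on small subimage pairs yet restore whole images in a single forward pass.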

Keywords

Image processing · Blind inpainting · Deep learning · Convolutional neural network

Notes

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61001179, 61372173, 61471132 and 61201393) and the Guangdong Higher Education Engineering Technology Research Center (No. 501130144). We also thank Rolf Köhler for kindly providing us with many of the materials used in [20].

References

  1. Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '00), pp. 417–424 (2000)
  2. Guillemot, C., Le Meur, O.: Image inpainting: overview and recent advances. IEEE Signal Process. Mag. 31(1), 127–144 (2014)
  3. Criminisi, A., Perez, P., Toyama, K.: Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Image Process. 13(9), 1200–1212 (2004)
  4. Drori, I., Cohen-Or, D., Yeshurun, H.: Fragment-based image completion. ACM Trans. Graph. 22(3), 303–312 (2003)
  5. He, K., Sun, J.: Statistics of patch offsets for image completion. In: European Conference on Computer Vision (ECCV), pp. 16–29 (2012)
  6. Le Meur, O., Ebdelli, M., Guillemot, C.: Hierarchical super-resolution-based inpainting. IEEE Trans. Image Process. 22(10), 3779–3790 (2013)
  7. Chan, T.F., Shen, J.: Non-texture inpainting by curvature-driven diffusions (CDD). J. Vis. Commun. Image Represent. 12(4), 436–449 (2001)
  8. Masnou, S., Morel, J.-M.: Level lines based disocclusion. In: International Conference on Image Processing (ICIP), vol. 3, pp. 259–263 (1998)
  9. Bertalmio, M., Bertozzi, A.L., Sapiro, G.: Navier-Stokes, fluid dynamics, and image and video inpainting. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 355–362 (2001)
  10. Xu, Z., Sun, J.: Image inpainting by patch propagation using patch sparsity. IEEE Trans. Image Process. 19(5), 1153–1165 (2010)
  11. Mairal, J., Elad, M., Sapiro, G.: Sparse representation for color image restoration. IEEE Trans. Image Process. 17(1), 53–69 (2008)
  12. Fadili, M.J., Starck, J.L., Murtagh, F.: Inpainting and zooming using sparse representations. Comput. J. 52(1), 64–79 (2009)
  13. Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 208–220 (2013)
  14. Jain, V., Seung, H.S.: Natural image denoising with convolutional networks. In: 22nd Annual Conference on Neural Information Processing Systems (NIPS), pp. 769–776 (2008)
  15. Burger, H.C., Schuler, C.J., Harmeling, S.: Image denoising: can plain neural networks compete with BM3D? In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4321–4328 (2012)
  16. Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision (ECCV), pp. 184–199 (2014)
  17. Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks (2015). arXiv:1501.00092
  18. Cui, Z., Chang, H., Shan, S., Zhong, B., Chen, X.: Deep network cascade for image super-resolution. In: European Conference on Computer Vision (ECCV), pp. 49–64 (2014)
  19. Schuler, C.J., Burger, H.C., Harmeling, S., Schölkopf, B.: A machine learning approach for non-blind image deconvolution. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1067–1074 (2013)
  20. Köhler, R., Schuler, C., Schölkopf, B., Harmeling, S.: Mask-specific inpainting with deep neural networks. Lect. Notes Comput. Sci. 8753, 523–534 (2014)
  21. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
  22. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989)
  23. Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: International Conference on Machine Learning (ICML), pp. 807–814 (2010)
  24. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 249–256 (2010)
  25. Bouvrie, J.: Notes on convolutional neural networks. MIT CBCL Tech Report, pp. 38–44 (2006)
  26. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding (2014). arXiv:1408.5093
  27. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255 (2009)
  28. Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: 8th International Conference on Computer Vision (ICCV), vol. 2, pp. 416–423 (2001)

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Nian Cai (1, Email author)
  • Zhenghang Su (1)
  • Zhineng Lin (1)
  • Han Wang (2)
  • Zhijing Yang (1)
  • Bingo Wing-Kuen Ling (1)
  1. School of Information Engineering, Guangdong University of Technology, Guangzhou, China
  2. School of Electromechanical Engineering, Guangdong University of Technology, Guangzhou, China
