Abstract
Most existing inpainting techniques require knowing beforehand where the damaged pixels are, i.e., they are non-blind inpainting methods. However, in many applications, such information may not be readily available. In this paper, we propose a novel blind inpainting method based on a fully convolutional neural network, which we term the blind inpainting convolutional neural network (BICNN). It simply cascades three convolutional layers to directly learn an end-to-end mapping from a pre-acquired dataset of corrupted/ground-truth subimage pairs. Stochastic gradient descent with standard backpropagation is used to train the BICNN. Once trained, the BICNN can automatically identify and remove corrupting patterns from a corrupted image without knowing their specific regions. The learned BICNN takes a corrupted image of any size as input and directly produces a clean output in a single forward pass. Experimental results indicate that the proposed method achieves better inpainting performance than existing inpainting methods for various corrupting patterns.
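The fully convolutional design described above can be sketched in a few lines of numpy: three stacked convolutions, with rectified-linear activations after the first two and a linear reconstruction layer at the end. Because every layer is a same-padded convolution, the network accepts an image of any size and returns an output of the same size in one forward pass. The kernel sizes (3×3) and channel counts below are illustrative placeholders, not the paper's actual hyperparameters, and the weights are random rather than trained.

```python
import numpy as np

def conv2d(x, w, b):
    """Same-padded 2-D convolution.
    x: (H, W, Cin), w: (k, k, Cin, Cout), b: (Cout,) -> (H, W, Cout)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, _ = x.shape
    out = np.empty((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            # Contract the k x k x Cin patch against the filter bank.
            out[i, j] = np.tensordot(xp[i:i + k, j:j + k, :], w, axes=3) + b
    return out

def bicnn_forward(img, params):
    """Forward pass of a BICNN-style 3-layer fully convolutional net."""
    h = np.maximum(conv2d(img, *params[0]), 0)  # layer 1 + ReLU
    h = np.maximum(conv2d(h, *params[1]), 0)    # layer 2 + ReLU
    return conv2d(h, *params[2])                # layer 3: linear reconstruction

# Random (untrained) parameters for a grayscale image, placeholder sizes.
rng = np.random.default_rng(0)
params = [
    (0.1 * rng.standard_normal((3, 3, 1, 4)), np.zeros(4)),
    (0.1 * rng.standard_normal((3, 3, 4, 4)), np.zeros(4)),
    (0.1 * rng.standard_normal((3, 3, 4, 1)), np.zeros(1)),
]
img = rng.random((8, 10, 1))       # arbitrary height and width
out = bicnn_forward(img, params)   # same spatial size as the input
```

Note how nothing in `bicnn_forward` depends on the input dimensions, which is exactly what lets the trained network be applied to corrupted images of any size without retiling.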
References
Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH’00), pp. 417–424 (2000)
Guillemot, C., Le Meur, O.: Image inpainting: overview and recent advances. IEEE Signal Process. Mag. 31(1), 127–144 (2014)
Criminisi, A., Perez, P., Toyama, K.: Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Image Process. 13(9), 1200–1212 (2004)
Drori, I., Cohen-Or, D., Yeshurun, H.: Fragment-based image completion. ACM Trans. Graph. 22(3), 303–312 (2003)
He, K., Sun, J.: Statistics of patch offsets for image completion. Proc. Eur. Conf. Comput. Vis. (ECCV) 7773, 16–29 (2012)
Le Meur, O., Ebdelli, M., Guillemot, C.: Hierarchical super-resolution-based inpainting. IEEE Trans. Image Process. 22(10), 3779–3790 (2013)
Chan, T.F., Shen, J.: Non-texture inpainting by curvature-driven diffusions (CDD). J. Vis. Commun. Image Represent. 12(4), 436–449 (2001)
Masnou, S., Morel, J.-M.: Level lines based disocclusion. Int. Conf. Image Process. 3, 259–263 (1998)
Bertalmio, M., Bertozzi, A.L., Sapiro, G.: Navier–Stokes, fluid dynamics, and image and video inpainting. IEEE Conf. Comput. Vis. Pattern Recognit. 1, 355–362 (2001)
Xu, Z., Sun, J.: Image inpainting by patch propagation using patch sparsity. IEEE Trans. Image Process. 19(5), 1153–1165 (2010)
Mairal, J., Elad, M., Sapiro, G.: Sparse representation for color image restoration. IEEE Trans. Image Process. 17(1), 53–69 (2008)
Fadili, M.J., Starck, J.L., Murtagh, F.: Inpainting and zooming using sparse representations. Comput. J. 52(1), 64–79 (2009)
Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 208–220 (2013)
Jain, V., Seung, H.S.: Natural image denoising with convolutional networks. In: 22nd Annual Conference on Neural Information Processing Systems, pp. 769–776 (2008)
Burger, H.C., Schuler, C.J., Harmeling, S.: Image denoising: can plain neural networks compete with BM3D?. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4321–4328 (2012)
Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. ECCV 8692, 184–199 (2014)
Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks, (2015). arXiv:1501.00092
Cui, Z., Chang, H., Shan, S., Zhong, B., Chen, X.: Deep network cascade for image super-resolution. In: ECCV, pp. 49–64 (2014)
Schuler, C.J., Burger, H.C., Harmeling, S., Schölkopf, B.: A machine learning approach for non-blind image deconvolution. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1067–1074 (2013)
Köhler, R., Schuler, C., Schölkopf, B., Harmeling, S.: Mask-specific inpainting with deep neural networks. Lect. Note. Comput. Sci. 8753, 523–534 (2014)
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
LeCun, Y.A., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Backpropagation applied to handwritten zip code recognition. Neural Comput. 1(4), 541–551 (1989)
Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: International Conference on Machine Learning, pp. 807–814 (2010)
Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: International Conference on Artificial Intelligence and Statistics (AISTATS’10), pp. 249–256 (2010)
Bouvrie, J.: Notes on convolutional neural networks. MIT CBCL Tech Report, pp. 38–44 (2006)
Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding (2014). arXiv:1408.5093
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. In: CVPR, pp. 248–255 (2009)
Martin, D., Fowlkes, C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: 8th International Conference on Computer Vision, ICCV, vol. 2, pp. 416–423 (2001)
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61001179, 61372173, 61471132 and 61201393) and the Guangdong Higher Education Engineering Technology Research Center (No. 501130144). We also thank Rolf Köhler for kindly providing us with many of the materials used in [20].
Cite this article
Cai, N., Su, Z., Lin, Z. et al. Blind inpainting using the fully convolutional neural network. Vis Comput 33, 249–261 (2017). https://doi.org/10.1007/s00371-015-1190-z