Image inpainting technology can patch images with missing pixels. Existing methods use convolutional neural networks to repair corrupted images: the network extracts effective pixels around the missing region and uses an encoding–decoding structure to recover valuable information for filling the hole. However, if the missing part is too large to provide useful information, the result suffers from blurring, color mixing, and object confusion. To inpaint images with large holes, we propose a new algorithm, the compression–decompression network, built on existing methods. The compression network is responsible for inpainting and generating a down-sampled image; the decompression network is responsible for expanding the down-sampled image back to the original resolution. We construct the compression network with residual blocks and propose a similar pixel selection algorithm to expand the image, which performs better than a super-resolution network. We evaluate our model on the Places2 and CelebA datasets, using the similarity ratio as the metric. The results show that our model performs better when the inpainting task involves many conflicts.
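The abstract names the two stages but not their exact construction, so the following is only a minimal plain-Python sketch under stated assumptions: `compress` stands in for the compression network (simple average-pool downsampling in place of the residual inpainting CNN), and `decompress_similar_pixel` illustrates the general idea behind a similar-pixel-selection expansion, copying the candidate low-resolution pixel closest to a local reference value instead of blending neighbours as bilinear interpolation or a super-resolution network would. All function names and the exact selection rule are hypothetical, not taken from the paper.

```python
def compress(img, factor=2):
    """Stand-in for the compression network: average-pool downsampling.

    The paper's compression network additionally inpaints the hole while
    down-sampling; here we only model the resolution change.
    """
    h, w = len(img), len(img[0])
    return [[sum(img[i * factor + di][j * factor + dj]
                 for di in range(factor) for dj in range(factor)) / factor ** 2
             for j in range(w // factor)]
            for i in range(h // factor)]


def decompress_similar_pixel(small, factor=2):
    """Illustrative similar-pixel-selection upsampling.

    Each output pixel copies one of the known low-resolution neighbours
    (the one closest to the neighbourhood mean), so the expanded image
    reuses real pixel values rather than interpolated blends.
    """
    h, w = len(small), len(small[0])
    out = [[0.0] * (w * factor) for _ in range(h * factor)]
    for i in range(h * factor):
        for j in range(w * factor):
            ci, cj = i // factor, j // factor
            # Candidate low-resolution neighbours, clipped at the border.
            cand = [small[y][x]
                    for y in (max(ci - 1, 0), ci, min(ci + 1, h - 1))
                    for x in (max(cj - 1, 0), cj, min(cj + 1, w - 1))]
            target = sum(cand) / len(cand)  # local mean as the reference
            out[i][j] = min(cand, key=lambda v: abs(v - target))
    return out


# Tiny grayscale example: 4x4 image -> 2x2 compressed -> 4x4 expanded.
img = [[float(4 * r + c) for c in range(4)] for r in range(4)]
small = compress(img)                  # 2x2 down-sampled image
big = decompress_similar_pixel(small)  # expanded back to 4x4
```

Because the expansion copies existing pixel values instead of averaging them, it tends to preserve sharp colors, which is the motivation the abstract gives for preferring similar pixel selection over a super-resolution network.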
The data are not uploaded to any website. The authors can provide all code files on request.
Conflict of interest
The authors declare that there is no conflict of interest regarding the publication of this article.
Availability of data and material
Cite this article
Wu, Z., Cui, Y. Edge missing image inpainting with compression–decompression network in low similarity images. Machine Vision and Applications 32, 37 (2021). https://doi.org/10.1007/s00138-020-01151-9
- Deep learning
- Convolutional neural network
- Image inpainting
- Compression–decompression network