Edge missing image inpainting with compression–decompression network in low similarity images

Abstract

Image inpainting techniques restore images with missing pixels. Existing methods use convolutional neural networks to repair corrupted images: the network extracts valid pixels around the missing region and uses an encoding–decoding structure to recover useful information for filling the hole. However, when the missing region is too large to provide useful context, the results suffer from blurring, color mixing, and object confusion. To inpaint images with large holes, we propose a new algorithm, the compression–decompression network, building on existing methods. The compression network is responsible for inpainting and for generating a down-sampled image; the decompression network is responsible for expanding the down-sampled image back to the original resolution. We construct the compression network from residual blocks and propose a similar-pixel-selection algorithm for expanding the image, which performs better than a super-resolution network. We evaluate our model on the Places2 and CelebA datasets, using the similarity ratio as the metric. The results show that our model performs better when the inpainting task involves many conflicts.
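The two-stage pipeline described above can be sketched schematically. The snippet below is a minimal NumPy illustration of the data flow only: block averaging stands in for the learned compression network (which, in the paper, also inpaints the hole), and nearest-neighbor repetition stands in for the decompression stage. The function names and the toy 8×8 image are our own assumptions, not the paper's implementation.

```python
import numpy as np

def average_pool(img, factor):
    # Downsample by block averaging: a stand-in for the learned
    # compression network, which also fills the missing region.
    h, w, c = img.shape
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def nearest_upsample(lr, factor):
    # Expand the low-resolution image back to the original size:
    # a stand-in for the decompression stage.
    return lr.repeat(factor, axis=0).repeat(factor, axis=1)

# Toy 8x8 RGB image with a 4x4 "hole" (zeros) in the centre.
img = np.ones((8, 8, 3), dtype=np.float32)
img[2:6, 2:6] = 0.0

lr = average_pool(img, 2)           # compression-stage output, 4x4
restored = nearest_upsample(lr, 2)  # decompression-stage output, 8x8

print(lr.shape, restored.shape)     # (4, 4, 3) (8, 8, 3)
```

In the paper, the decompression stage is not a fixed upsampling filter but the similar-pixel-selection algorithm, which copies high-resolution blocks indexed by low-resolution similarity matches.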


Fig. 3 (caption fragment): … source pixel in the LR Output, and the most similar index is (i, j) in the LRGT. The white ‘Block’ locates the corresponding pixel block. The arrowed line represents the copying operation.
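The Fig. 3 caption suggests the following procedure: for each pixel of the low-resolution output, find the most similar pixel (i, j) in a low-resolution reference, then copy the corresponding high-resolution block. The sketch below is one plausible reading of that figure; the function name, the squared-RGB-distance similarity measure, and the exhaustive search are our assumptions, not the paper's exact algorithm.

```python
import numpy as np

def similar_pixel_expand(lr_out, lr_ref, hr_ref, factor):
    # For every pixel of the low-resolution output, find the most
    # similar pixel (i, j) in the low-resolution reference and copy the
    # corresponding factor x factor block from the high-resolution
    # reference into the expanded result.
    h, w, c = lr_out.shape
    hr = np.zeros((h * factor, w * factor, c), dtype=hr_ref.dtype)
    flat_ref = lr_ref.reshape(-1, c)
    for y in range(h):
        for x in range(w):
            # Squared distance in color space to every reference pixel.
            d = np.sum((flat_ref - lr_out[y, x]) ** 2, axis=1)
            i, j = divmod(int(np.argmin(d)), lr_ref.shape[1])
            hr[y * factor:(y + 1) * factor, x * factor:(x + 1) * factor] = \
                hr_ref[i * factor:(i + 1) * factor, j * factor:(j + 1) * factor]
    return hr

# Sanity check: when the LR output equals the LR reference, every pixel
# matches itself and the HR reference is reproduced exactly.
hr_ref = np.arange(8 * 8 * 3, dtype=np.float64).reshape(8, 8, 3)
lr_ref = hr_ref.reshape(4, 2, 4, 2, 3).mean(axis=(1, 3))
out = similar_pixel_expand(lr_ref, lr_ref, hr_ref, factor=2)
print(np.array_equal(out, hr_ref))  # True
```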

Code availability

The code has not been uploaded to any public repository. All code files are available from the authors upon request.


Funding

Not available.

Author information

Affiliations

Authors

Corresponding author

Correspondence to Yidong Cui.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest regarding the publication of this article.

Availability of data and material

Not available.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wu, Z., Cui, Y. Edge missing image inpainting with compression–decompression network in low similarity images. Machine Vision and Applications 32, 37 (2021). https://doi.org/10.1007/s00138-020-01151-9


Keywords

  • Deep learning
  • Convolutional neural network
  • Image inpainting
  • Compression–decompression network