Image Inpainting: A Review

Neural Processing Letters

Abstract

Image inpainting, the art of repairing old and deteriorated images, has been practised for many years, but it has recently gained renewed attention thanks to advances in image processing techniques. With improved image processing tools and the flexibility of digital image editing, automatic image inpainting has found important applications in computer vision and has become an important and challenging research topic in image processing. This paper reviews existing image inpainting approaches, classifying them into three categories: sequential-based, CNN-based, and GAN-based methods. For each category, we list methods that address different types of image distortion, and we also describe the publicly available datasets. Finally, we report evaluations of the three categories of inpainting methods on these datasets for different types of distortion, present the evaluation metrics, and discuss the performance of the methods in terms of those metrics. This overview can serve as a reference for image inpainting researchers and facilitates the comparison of both the methods and the datasets used. The main contribution of this paper is the presentation of the three categories of image inpainting methods, together with a list of available datasets that researchers can use to evaluate their proposed methods.
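To make the evaluation discussion concrete, the snippet below sketches two scores commonly reported for inpainting results, peak signal-to-noise ratio (PSNR) and mean absolute (L1) error. The abstract does not list the exact metrics used in the paper, so this is an illustrative example rather than the authors' evaluation code; the function names, the toy images, and the 8-bit peak value of 255 are assumptions.

```python
# Minimal, illustrative sketch of two scores commonly used to evaluate
# inpainting results: PSNR and mean absolute (L1) error. Not the authors'
# evaluation code; the 255 peak value and the toy images are assumptions.
import numpy as np


def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a ground-truth and an inpainted image."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)


def l1_error(reference: np.ndarray, restored: np.ndarray) -> float:
    """Mean absolute pixel error, often reported alongside PSNR and SSIM."""
    return float(np.mean(np.abs(reference.astype(np.float64) - restored.astype(np.float64))))


if __name__ == "__main__":
    # Toy example: a random "ground truth" and a slightly perturbed "restoration".
    rng = np.random.default_rng(0)
    gt = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    noise = rng.integers(-5, 6, size=gt.shape)
    restored = np.clip(gt.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(gt, restored):.2f} dB, L1: {l1_error(gt, restored):.2f}")
```

Higher PSNR and lower L1 error both indicate that the restored image is closer to the ground truth; structural similarity (SSIM) is another widely used complement to these pixel-wise scores.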


Notes

  1. http://places2.csail.mit.edu/download.htm.

  2. http://www.cad.zju.edu.cn/home/dengcai/Data/depthinpaint/DepthInpaintData.html.

  3. https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/.

  4. http://image-net.org/.

  5. http://sipi.usc.edu/database/.

  6. http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.

  7. http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes.

  8. http://cocodataset.org/#home.

  9. http://www.tamirhassan.com/html/competition.html.

  10. https://robotvault.bitbucket.io/scenenet-rgbd.html.

  11. https://ai.stanford.edu/jkrause/cars/car/dataset.html.

  12. https://www.cityscapes-dataset.com/.

  13. http://vision.middlebury.edu/stereo/data/.
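In typical inpainting benchmarks built on datasets such as those listed above, a ground-truth image is corrupted with a synthetic hole or mask, the hole is inpainted, and the result is scored against the original. The sketch below shows one common corruption, a centred rectangular hole; the hole size and helper name are illustrative assumptions, not details taken from the reviewed methods.

```python
# Minimal sketch, under assumed benchmark conventions, of how a dataset image
# is corrupted with a rectangular hole before running an inpainting method.
# The hole size and helper name are illustrative, not taken from the paper.
import numpy as np


def apply_center_hole(image: np.ndarray, hole_fraction: float = 0.25):
    """Zero out a centred rectangular region; return (corrupted image, boolean mask)."""
    h, w = image.shape[:2]
    hh, hw = int(h * hole_fraction), int(w * hole_fraction)
    top, left = (h - hh) // 2, (w - hw) // 2
    mask = np.zeros((h, w), dtype=bool)
    mask[top:top + hh, left:left + hw] = True      # True marks missing pixels
    corrupted = image.copy()
    corrupted[mask] = 0                            # fill the hole with zeros
    return corrupted, mask


if __name__ == "__main__":
    img = np.full((128, 128, 3), 200, dtype=np.uint8)   # stand-in for a dataset image
    corrupted, mask = apply_center_hole(img)
    print(corrupted.shape, int(mask.sum()), "missing pixels")
```

Irregular free-form masks, overlaid text, and noise are other distortion types used in the inpainting literature; they differ only in how the boolean mask is generated.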


Acknowledgements

This publication was made possible by NPRP grant # NPRP8-140-2-065 from the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.

Author information

Corresponding author

Correspondence to Omar Elharrouss.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Elharrouss, O., Almaadeed, N., Al-Maadeed, S. et al. Image Inpainting: A Review. Neural Process Lett 51, 2007–2028 (2020). https://doi.org/10.1007/s11063-019-10163-0
