Toward an objective benchmark for video completion


Video-completion methods aim to fill selected regions of a video sequence in a natural-looking manner with little to no additional user interaction. Numerous algorithms have been proposed to solve this problem, but a unified benchmark for quantifying progress in the field is still lacking. Video-completion results are usually judged by their plausibility and are not expected to adhere to a single ground-truth result, which complicates measurement of video-completion performance. In this paper, we address this problem by proposing a set of full-reference quality metrics that outperform naïve approaches, as well as an online benchmark for video-completion algorithms. We construct seven test sequences with ground-truth video-completion results by compositing various foreground objects over a set of background videos. Using this dataset, we conduct an extensive comparative study of video-completion perceptual quality involving six algorithms and over 300 human participants. Finally, we show that by relaxing the requirement of complete adherence to ground truth and by taking temporal consistency into account, we can increase the correlation of objective quality metrics with perceptual completion quality on the proposed dataset.
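The abstract's dataset-construction idea — compositing a foreground object over a clean background video so that the true content behind the object is known — can be sketched in a few lines. The paper's actual compositing pipeline, mask format, and file handling are not specified here; this is a minimal NumPy illustration under those assumptions, where `composite` produces the test frame and the untouched background serves as the ground-truth completion result:

```python
import numpy as np

def composite(fg, alpha, bg):
    """Alpha-composite a foreground object over a background frame.

    fg, bg: float arrays of shape (H, W, 3) with values in [0, 1]
    alpha:  float array of shape (H, W, 1) with values in [0, 1]
    """
    return alpha * fg + (1.0 - alpha) * bg

# Toy 2x2 "frame": the top row is covered by an opaque gray object.
fg = np.full((2, 2, 3), 0.8)
bg = np.zeros((2, 2, 3))
alpha = np.array([[[1.0], [1.0]],
                  [[0.0], [0.0]]])

frame = composite(fg, alpha, bg)

# The region to complete is where the object is visible; the ground-truth
# result for removing it is simply the original background `bg`.
mask = alpha[..., 0] > 0
```

A full-reference metric can then compare an algorithm's output against `bg` inside `mask`, frame by frame, which is exactly what the constructed sequences make possible.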





Author information



Corresponding author

Correspondence to Alexander Bokov.

Additional information

This study was funded by the RFBR under research project 15-01-08632 A.


About this article


Cite this article

Bokov, A., Vatolin, D., Erofeev, M. et al. Toward an objective benchmark for video completion. SIViP 13, 601–608 (2019).



Keywords

  • Video completion
  • Inpainting
  • Performance evaluation