
Signal, Image and Video Processing

Volume 13, Issue 3, pp 601–608

Toward an objective benchmark for video completion

  • Alexander Bokov
  • Dmitriy Vatolin
  • Mikhail Erofeev
  • Yury Gitman
Original Paper

Abstract

Video-completion methods aim to fill selected regions of a video sequence in a natural-looking manner with little to no additional user interaction. Numerous algorithms have been proposed to solve this problem; however, a unified benchmark for quantifying progress in the field is still lacking. Video-completion results are usually judged by their plausibility and are not expected to adhere to a single ground-truth result, which complicates measurement of video-completion performance. In this paper, we address this problem by proposing a set of full-reference quality metrics that outperform naïve approaches, together with an online benchmark for video-completion algorithms. We construct seven test sequences with ground-truth video-completion results by compositing various foreground objects over a set of background videos. Using this dataset, we conduct an extensive comparative study of video-completion perceptual quality involving six algorithms and over 300 human participants. Finally, we show that by relaxing the requirement of complete adherence to ground truth and by taking temporal consistency into account, we can increase the correlation of objective quality metrics with perceptual completion quality on the proposed dataset.
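The abstract outlines the general idea of a full-reference completion metric that combines per-frame fidelity to a ground-truth composite with a temporal-consistency term. The paper's actual metrics are not reproduced here; the following is only a minimal illustrative sketch of that idea, using SSIM restricted to the completed region as a stand-in for the spatial term and a simple frame-difference comparison as a stand-in for temporal consistency. The function name `score_completion`, the weighting parameter `alpha`, and the array layout are all hypothetical choices, not taken from the paper.

```python
# Illustrative sketch only; not the metric proposed in the paper.
import numpy as np
from skimage.metrics import structural_similarity as ssim


def score_completion(result, reference, masks, alpha=0.5):
    """result, reference: (T, H, W, 3) float arrays in [0, 1];
    masks: (T, H, W) boolean array marking the completed (hole) region.
    Returns a score in [0, 1]; higher is better."""
    spatial_scores, temporal_scores = [], []

    for t in range(len(result)):
        # Per-frame fidelity: SSIM map against the ground-truth composite,
        # averaged over the completed region only.
        _, ssim_map = ssim(reference[t], result[t], channel_axis=-1,
                           data_range=1.0, full=True)
        hole = masks[t]
        if hole.any():
            spatial_scores.append(ssim_map[hole].mean())

        # Temporal consistency: frame-to-frame changes in the result should
        # resemble those in the reference (a crude proxy for the
        # temporal-consistency terms discussed in the paper).
        if t > 0:
            diff_res = result[t] - result[t - 1]
            diff_ref = reference[t] - reference[t - 1]
            hole_pair = masks[t] & masks[t - 1]
            if hole_pair.any():
                err = np.abs(diff_res - diff_ref)[hole_pair].mean()
                temporal_scores.append(1.0 - err)

    spatial = float(np.mean(spatial_scores)) if spatial_scores else 1.0
    temporal = float(np.mean(temporal_scores)) if temporal_scores else 1.0
    return alpha * spatial + (1.0 - alpha) * temporal
```

In this sketch, setting `alpha` below 1 corresponds to the paper's observation that adding a temporal term, rather than relying on strict per-frame adherence to ground truth alone, can improve correlation with perceived completion quality.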

Keywords

Video completion · Inpainting · Performance evaluation


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. Graphics and Media Lab, Lomonosov Moscow State University, Moscow, Russia
