Flow-edge Guided Video Completion

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12357)

Abstract

We present a new flow-based video completion algorithm. Previous flow completion methods are often unable to retain the sharpness of motion boundaries. Our method first extracts and completes motion edges, then uses them to guide piecewise-smooth flow completion with sharp edges. Existing methods propagate colors along local flow connections between adjacent frames; however, not all missing regions in a video can be reached this way, because motion boundaries form impenetrable barriers. Our method alleviates this problem by introducing non-local flow connections to temporally distant frames, enabling the propagation of video content across motion boundaries. We validate our approach on the DAVIS dataset. Both visual and quantitative results show that our method compares favorably against state-of-the-art algorithms.
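
For concreteness, below is a minimal NumPy/OpenCV sketch of the two ideas in the abstract, not the authors' implementation: the learned edge completion is replaced by a morphological closing, the edge-guided flow fill by a Jacobi diffusion that treats motion edges as barriers, and the chained color propagation by nearest-pixel flow advection. The helper names (complete_flow, _diffuse, propagate_color) and all parameter values are illustrative assumptions.

import cv2
import numpy as np


def complete_flow(flow, hole):
    """Fill flow (HxWx2) inside `hole` (bool HxW) without blurring motion edges."""
    # 1. Extract motion edges from the flow magnitude with a Canny detector.
    mag = np.linalg.norm(flow, axis=2)
    mag8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(mag8, 50, 150) > 0
    # 2. Complete the edges across the hole. The paper learns this step; a
    #    morphological closing is a crude stand-in (assumption).
    closed = cv2.morphologyEx(edges.astype(np.uint8), cv2.MORPH_CLOSE,
                              np.ones((9, 9), np.uint8)) > 0
    # 3. Diffuse each flow channel into the hole, treating the completed
    #    edges as barriers so the fill stays piecewise smooth with sharp borders.
    out = flow.astype(np.float32).copy()
    for c in range(2):
        out[..., c] = _diffuse(out[..., c], hole, closed & hole)
    return out


def _diffuse(x, hole, barrier, iters=500):
    """Jacobi fill; barrier pixels neither update nor feed their neighbors.

    (A real implementation would also assign values to the barrier pixels
    themselves, e.g. from one side of the edge; omitted for brevity.)
    """
    x = x.copy()
    x[hole] = 0.0
    w = (~barrier).astype(np.float32)          # edge pixels get zero weight
    upd = hole & ~barrier
    for _ in range(iters):
        num = sum(np.roll(x * w, s, a) for s in (1, -1) for a in (0, 1))
        den = sum(np.roll(w, s, a) for s in (1, -1) for a in (0, 1))
        ok = upd & (den > 0)
        x[ok] = num[ok] / den[ok]
    return x


def propagate_color(frames, holes, flows, t, max_hops=20):
    """Fill frame t by chaining completed forward flows.

    Each missing pixel is advected frame to frame until it lands on a known
    pixel, whose color is copied back. Because the chain can span many frames,
    content reaches pixels whose immediate neighbors are blocked by motion
    boundaries: the non-local connections described in the abstract.
    """
    h, w = holes[t].shape
    ys, xs = np.nonzero(holes[t])
    out = frames[t].copy()
    pos = np.stack([xs, ys], axis=1).astype(np.float32)
    unresolved = np.ones(len(pos), dtype=bool)
    for hop in range(max_hops):
        f = t + hop
        if f + 1 >= len(frames) or f >= len(flows):
            break
        # Advect one frame forward using the flow at the current positions.
        xi = np.clip(pos[:, 0].round().astype(int), 0, w - 1)
        yi = np.clip(pos[:, 1].round().astype(int), 0, h - 1)
        pos = pos + flows[f][yi, xi]
        xi = np.clip(pos[:, 0].round().astype(int), 0, w - 1)
        yi = np.clip(pos[:, 1].round().astype(int), 0, h - 1)
        # Copy colors for pixels that landed on known content in frame f+1.
        landed = unresolved & ~holes[f + 1][yi, xi]
        out[ys[landed], xs[landed]] = frames[f + 1][yi[landed], xi[landed]]
        unresolved &= ~landed
    return out

Even this crude version shows why the non-local chains matter: a pixel walled off by a motion boundary in the adjacent frames can still be filled once the chain reaches a frame in which it is disoccluded.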

Supplementary material

Supplementary material 1: 504453_1_En_42_MOESM1_ESM.zip (92.9 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Virginia Tech, Blacksburg, USA
  2. Facebook, Seattle, USA
