Background Inpainting for Videos with Dynamic Objects and a Free-Moving Camera

  • Miguel Granados
  • Kwang In Kim
  • James Tompkin
  • Jan Kautz
  • Christian Theobalt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7572)

Abstract

We propose a method for removing marked dynamic objects from videos captured with a free-moving camera, provided the objects occlude parts of a static background. Our approach takes as input a video, a mask marking the object to be removed, and a mask marking the dynamic objects that should remain in the scene. To inpaint a frame, we align other candidate frames in which parts of the missing region are visible. Among these candidates, a single source is chosen for each pixel so that the final arrangement is color-consistent. Intensity differences between sources are smoothed using gradient-domain fusion. Our frame alignment process assumes that the scene can be approximated by piecewise planar geometry: a set of homographies is estimated for each frame pair, and one is selected per pixel such that the color discrepancy is minimized and the epipolar constraints are maintained. We validate our method on several real-world video sequences, demonstrating that, unlike in previous work, inpainting videos shot with free-moving cameras does not necessarily require estimating absolute camera positions and per-frame, per-pixel depth maps.
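To make the alignment step concrete: under the piecewise-planar assumption, each candidate frame is related to the target frame by one of several homographies, and each missing pixel is filled from the candidate whose warped color best matches the surrounding background. The minimal pure-Python sketch below shows the two primitives involved, mapping a pixel through a 3×3 homography and choosing the candidate color with the smallest color discrepancy. It is an illustration only, not the paper's implementation (which selects sources globally via graph cuts under epipolar constraints and fuses them in the gradient domain); the function names are hypothetical.

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested lists),
    returning the dehomogenized target coordinates."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

def best_source(reference_color, candidate_colors):
    """Pick the candidate color closest to a reference background color,
    i.e. minimize a per-pixel color discrepancy (here a plain squared
    distance, standing in for the paper's graph-cut energy)."""
    def discrepancy(c):
        return sum((a - b) ** 2 for a, b in zip(reference_color, c))
    return min(candidate_colors, key=discrepancy)
```

For example, a pure-translation homography `[[1, 0, 5], [0, 1, 2], [0, 0, 1]]` maps pixel (3, 4) to (8, 6); in the full method, one such per-pixel choice of homography and source frame is made jointly for all missing pixels rather than independently as here.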

Keywords

video processing, video completion, video inpainting, image alignment, background estimation, free-camera, graph-cuts

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Miguel Granados (1)
  • Kwang In Kim (1)
  • James Tompkin (1, 2, 3)
  • Jan Kautz (2)
  • Christian Theobalt (1)
  1. Max-Planck-Institut für Informatik, Saarbrücken, Germany
  2. University College London, London, UK
  3. Intel Visual Computing Institute, Saarbrücken, Germany