Abstract

Extracting temporally-coherent alpha mattes from video is an important but challenging problem in post-production. Previous video matting systems are highly sensitive to initial conditions and image noise, and thus cannot reliably produce stable alpha mattes free of temporal jitter. In this paper we propose an improved video matting system with two new components: (1) an accurate trimap propagation mechanism that sets up the initial matting conditions in a temporally-coherent way; and (2) a temporal matte filter that improves the temporal coherence of the mattes while maintaining the matte structure on individual frames. Experimental results show that, compared with previous methods, the two new components lead to alpha mattes with better temporal coherence.
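The abstract only names the two components, so the sketch below is one plausible reading of the pipeline, not the authors' method: the trimap is propagated by warping the previous frame's binary mask with an off-the-shelf dense optical flow (OpenCV's Farneback) and carving an unknown band around the boundary, and the matte filter is approximated by a naive weighted average over neighboring frames. All function names, parameters, and the choice of flow algorithm are illustrative assumptions; the paper's actual filter is designed to preserve per-frame matte structure, which a plain temporal average does not.

# Hypothetical sketch of the two components described in the abstract.
# Not the authors' code; OpenCV Farneback flow and the 3-tap temporal
# average are stand-ins chosen for illustration.
import cv2
import numpy as np

def propagate_trimap(curr_gray, prev_gray, prev_mask, band=10):
    """Warp the previous frame's binary mask (0/255, uint8) into the
    current frame, then mark a band around the boundary as unknown."""
    # Flow from current to previous frame, so each current pixel can
    # sample its corresponding location in the previous mask.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + flow[..., 0]).astype(np.float32)
    map_y = (ys + flow[..., 1]).astype(np.float32)
    warped = cv2.remap(prev_mask, map_x, map_y, cv2.INTER_NEAREST)
    # Erode both sides of the mask; what survives is confident
    # foreground/background, the rest becomes the unknown band.
    kernel = np.ones((band, band), np.uint8)
    fg = cv2.erode(warped, kernel)
    bg = cv2.erode(255 - warped, kernel)
    trimap = np.full_like(warped, 128)  # unknown by default
    trimap[fg > 0] = 255
    trimap[bg > 0] = 0
    return trimap

def temporal_matte_filter(alphas, w=(0.25, 0.5, 0.25)):
    """Suppress frame-to-frame jitter by blending each alpha matte
    (float arrays in [0, 1]) with its temporal neighbors. A faithful
    implementation would motion-compensate the neighbors first."""
    n = len(alphas)
    out = []
    for t in range(n):
        a_prev = alphas[max(t - 1, 0)]
        a_next = alphas[min(t + 1, n - 1)]
        out.append(w[0] * a_prev + w[1] * alphas[t] + w[2] * a_next)
    return out

In this reading, the propagated trimap would seed a per-frame matting solver (e.g., closed-form matting), and the resulting mattes would then be passed through the temporal filter as a post-process.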

Keywords

Gaussian Mixture Model · Temporal Coherence · Thin Plate Spline · Binary Mask · Video Object



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

Xue Bai, Jue Wang, David Simons
Adobe Systems, Seattle, USA
