Interactive Segmentation of High-Resolution Video Content Using Temporally Coherent Superpixels and Graph Cut

  • Matthias Reso
  • Björn Scheuermann
  • Jörn Jachalsky
  • Bodo Rosenhahn
  • Jörn Ostermann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8887)

Abstract

Interactive video segmentation has become a popular topic in computer vision and computer graphics. Discrete optimization using maximum-flow algorithms is one of the preferred techniques for interactive video segmentation. This paper extends pixel-based graph cut approaches to overcome their high memory requirements. The basic idea is to run a graph cut optimization framework on top of temporally coherent superpixels. Besides grouping spatially coherent pixels that share a similar color, these superpixel algorithms additionally exploit the temporal connections between such image regions across frames. Thereby, the number of variables in the optimization framework is drastically reduced. The effectiveness of the proposed algorithm is shown quantitatively, qualitatively, and through timing comparisons of different temporally coherent superpixel approaches. Experiments on video sequences show that temporally coherent superpixels lead to a significant speed-up and reduced memory consumption. Thus, video sequences can be segmented interactively and more efficiently while achieving better segmentation quality than other approaches.
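
To make the basic idea concrete, the following is a minimal sketch (not the authors' implementation) of binary foreground/background segmentation posed as a min-cut over a superpixel graph: each temporally coherent superpixel becomes one node, t-links carry unary color-model costs, and n-links connect spatially and temporally adjacent superpixels. The unary costs, adjacency list, and edge weights are illustrative assumptions, and a generic max-flow routine from networkx stands in for a specialized solver.

```python
# Minimal sketch: binary segmentation as a min-cut over a superpixel graph.
# Inputs (unary costs, adjacency, weights) are assumed to be precomputed from
# temporally coherent superpixels; this is not the authors' implementation.
import networkx as nx
import numpy as np

def segment_superpixels(unary_fg, unary_bg, edges, edge_weights):
    """Label each superpixel as foreground (1) or background (0).

    unary_fg[i]/unary_bg[i]: cost of assigning superpixel i to fg/bg
                             (e.g. negative log-likelihood under a color model).
    edges:        (i, j) pairs of spatially or temporally adjacent superpixels.
    edge_weights: smoothness cost paid when i and j receive different labels.
    """
    n = len(unary_fg)
    G = nx.DiGraph()
    src, snk = "S", "T"
    for i in range(n):
        # t-links: the source edge is cut (paid) if i ends up background,
        # the sink edge is cut (paid) if i ends up foreground.
        G.add_edge(src, i, capacity=float(unary_bg[i]))
        G.add_edge(i, snk, capacity=float(unary_fg[i]))
    for (i, j), w in zip(edges, edge_weights):
        # n-links: penalize a label change across adjacent superpixels.
        G.add_edge(i, j, capacity=float(w))
        G.add_edge(j, i, capacity=float(w))
    _, (source_side, _) = nx.minimum_cut(G, src, snk)
    return np.array([1 if i in source_side else 0 for i in range(n)])
```

Because the number of superpixels per frame is orders of magnitude smaller than the number of pixels, the graph stays small even for high-resolution sequences, which is exactly where the reduced memory consumption and speed-up come from.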

Keywords

Video Sequence · Segmentation Result · Interactive Video Segmentation · Segmentation Error · Segmentation Quality
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Matthias Reso (1)
  • Björn Scheuermann (1)
  • Jörn Jachalsky (2)
  • Bodo Rosenhahn (1)
  • Jörn Ostermann (1)

  1. Leibniz Universität Hannover, Hannover, Germany
  2. Technicolor Research & Innovation, Hannover, Germany
