Generalized Background Subtraction Using Superpixels with Label Integrated Motion Estimation

  • Jongwoo Lim
  • Bohyung Han
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8693)

Abstract

We propose an online background subtraction algorithm with superpixel-based density estimation for videos captured by a moving camera. Our algorithm maintains appearance and motion models of foreground and background for each superpixel, computes foreground and background likelihoods for each pixel based on these models, and determines pixelwise labels using binary belief propagation. The estimated labels trigger an update of the appearance and motion models, and these steps are repeated iteratively within each frame. After convergence, the appearance models are propagated through sequential Bayesian filtering, where the predictions rely on motion fields of both labels, whose computation exploits the segmentation mask. Superpixel-based modeling and label-integrated motion estimation make the propagated appearance models more accurate than those of existing methods, since the models are constructed on visually coherent regions and the quality of the estimated motion is improved by avoiding motion smoothing across regions with different labels. We evaluate our algorithm on challenging video sequences and demonstrate significant performance improvements over state-of-the-art techniques, both quantitatively and qualitatively.
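The per-frame pipeline described above (per-superpixel appearance models, per-pixel likelihoods, label inference, label-driven model update) can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration rather than the authors' implementation: superpixels are approximated by a regular grid, the per-superpixel appearance models are single diagonal Gaussians, and the binary belief propagation step is replaced by an independent per-pixel likelihood-ratio test. All function and variable names are placeholders.

    import numpy as np

    def grid_superpixels(h, w, cell=16):
        # Stand-in for a real superpixel segmentation (e.g., entropy-rate superpixels).
        ys, xs = np.mgrid[0:h, 0:w]
        return (ys // cell) * ((w + cell - 1) // cell) + (xs // cell)

    def gaussian_loglik(frame, mean, var):
        # Per-pixel log-likelihood under a diagonal Gaussian color model.
        return -0.5 * np.sum((frame - mean) ** 2 / var + np.log(var), axis=-1)

    def label_frame(frame, sp, fg_model, bg_model):
        # Compare foreground and background likelihoods of the models attached to
        # each pixel's superpixel; the paper instead infers labels with binary
        # belief propagation, which adds a spatial smoothness prior.
        fg_ll = gaussian_loglik(frame, fg_model["mean"][sp], fg_model["var"][sp])
        bg_ll = gaussian_loglik(frame, bg_model["mean"][sp], bg_model["var"][sp])
        return fg_ll > bg_ll  # True = foreground

    def update_model(model, frame, sp, mask, rate=0.1):
        # Update each superpixel's mean from pixels whose current label matches the
        # model; in the paper the estimated labels trigger this update and the
        # likelihood/labeling/update loop is iterated until convergence per frame.
        for s in np.unique(sp):
            sel = (sp == s) & mask
            if sel.any():
                obs = frame[sel].mean(axis=0)
                model["mean"][s] = (1 - rate) * model["mean"][s] + rate * obs
        return model

    # Toy usage on a random frame.
    h, w = 64, 64
    frame = np.random.rand(h, w, 3)
    sp = grid_superpixels(h, w)
    n_sp = sp.max() + 1
    fg = {"mean": np.full((n_sp, 3), 0.7), "var": np.full((n_sp, 3), 0.05)}
    bg = {"mean": np.full((n_sp, 3), 0.3), "var": np.full((n_sp, 3), 0.05)}
    mask = label_frame(frame, sp, fg, bg)
    fg = update_model(fg, frame, sp, mask)
    bg = update_model(bg, frame, sp, ~mask)

A full implementation along the lines of the abstract would also maintain per-superpixel motion models and, after convergence, propagate the appearance models with sequential Bayesian filtering driven by label-integrated (layered) optical flow computed from the segmentation mask.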

Keywords

generalized background subtraction · superpixel segmentation · density propagation · layered optical flow estimation

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Jongwoo Lim (1)
  • Bohyung Han (2)

  1. Division of Computer Science and Engineering, Hanyang University, Seoul, Korea
  2. Department of Computer Science and Engineering, POSTECH, Korea
