
A Novelty Detection Approach for Foreground Region Detection in Videos with Quasi-stationary Backgrounds

  • Conference paper
Advances in Visual Computing (ISVC 2006)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 4291)

Included in the following conference series: International Symposium on Visual Computing (ISVC)

Abstract

Detecting regions of interest in video sequences is one of the most important tasks in many high-level video processing applications. In this paper, a novel approach based on support vector data description is presented, which detects foreground regions in videos with quasi-stationary backgrounds. The main contribution of this paper is a novelty detection approach that automatically segments video frames into background and foreground regions. By using support vector data description for each pixel, the decision boundary for the background class is modeled without the need to statistically model its probability density function. The proposed method achieves very accurate foreground region detection rates even in very low-contrast video sequences and in the presence of quasi-stationary backgrounds. As opposed to many statistical background modeling approaches, the only critical parameter that needs to be adjusted in our method is the number of background training frames.
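To make the idea concrete, the following is a minimal sketch of per-pixel one-class background modeling in the spirit of the abstract, not the authors' implementation. It assumes scikit-learn's OneClassSVM (closely related to support vector data description under an RBF kernel) as a stand-in for SVDD, uses single grayscale intensities per pixel as features, and the frame sizes and the nu/gamma values are illustrative choices rather than values from the paper.

# Sketch only: per-pixel one-class background modeling, assuming OneClassSVM
# as a stand-in for SVDD. Parameter values and data are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

def train_background_models(training_frames, nu=0.1, gamma=0.5):
    """Fit one one-class model per pixel from N background training frames.

    training_frames: array of shape (N, H, W), grayscale intensities in [0, 1].
    Returns an (H, W) array of fitted OneClassSVM objects.
    """
    n, h, w = training_frames.shape
    models = np.empty((h, w), dtype=object)
    for i in range(h):
        for j in range(w):
            samples = training_frames[:, i, j].reshape(-1, 1)  # N intensity samples
            models[i, j] = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(samples)
    return models

def detect_foreground(frame, models):
    """Label a pixel as foreground when it falls outside its background boundary."""
    h, w = frame.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            # predict() returns -1 for outliers, i.e. novel (foreground) pixels
            mask[i, j] = models[i, j].predict([[frame[i, j]]])[0] == -1
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic quasi-stationary background: small intensity fluctuations per pixel
    background = 0.5 + 0.05 * rng.standard_normal((30, 16, 16))
    models = train_background_models(background)
    test = 0.5 + 0.05 * rng.standard_normal((16, 16))
    test[4:8, 4:8] = 0.9  # simulated foreground object
    print(detect_foreground(test, models).astype(int))

As in the paper's framing, the only quantity that must be chosen per scene here is the number of background training frames; the per-pixel decision boundary is learned without assuming any parametric density for the background.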





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tavakkoli, A., Nicolescu, M., Bebis, G. (2006). A Novelty Detection Approach for Foreground Region Detection in Videos with Quasi-stationary Backgrounds. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2006. Lecture Notes in Computer Science, vol 4291. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11919476_5


  • DOI: https://doi.org/10.1007/11919476_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-48628-2

  • Online ISBN: 978-3-540-48631-2

  • eBook Packages: Computer Science, Computer Science (R0)
