Learning Contextual Variations for Video Segmentation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5008)

Abstract

This paper deals with video segmentation in vision systems. We focus on the maintenance of background models in long-term videos of changing environments, which remains a real challenge in video surveillance. We propose an original weakly supervised method for learning contextual variations in videos. Our approach uses a clustering algorithm to automatically identify different contexts based on image content analysis. State-of-the-art video segmentation algorithms (e.g. codebook, MoG) are then trained on each cluster. The goal is to achieve a dynamic selection of background models. We have evaluated our approach on a long video sequence (24 hours). The presented results show that our approach improves segmentation compared to the codebook and MoG algorithms alone.
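Although only the abstract is available here, it outlines a concrete pipeline: cluster frames by image content to discover contexts, train one background model per context, and select the matching model dynamically at run time. The following is a minimal sketch of that general idea, assuming global colour histograms as frame features, DBSCAN (as in reference 7) for context discovery, and OpenCV's MOG2 mixture-of-Gaussians subtractor as the per-context model. All function names, features, and parameters below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the context-aware background-modelling idea from the abstract:
# 1) describe each frame with a global colour feature,
# 2) cluster the features to discover "contexts" (weakly supervised step),
# 3) keep one background model per context and pick the closest one at run time.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def frame_descriptor(frame, bins=8):
    """Global colour histogram as a stand-in for the image-content features."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, None).flatten()

def discover_contexts(training_frames, eps=0.3, min_samples=10):
    """Cluster frame descriptors into contexts (labels of -1 are noise)."""
    feats = np.array([frame_descriptor(f) for f in training_frames])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    return feats, labels

def train_context_models(training_frames, labels):
    """Train one MoG background model per discovered context."""
    models = {}
    for ctx in set(labels) - {-1}:                   # skip DBSCAN noise points
        model = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
        for frame, lab in zip(training_frames, labels):
            if lab == ctx:
                model.apply(frame)                   # update model with context frames
        models[ctx] = model
    return models

def segment(frame, feats, labels, models):
    """Dynamically select the background model of the nearest context."""
    d = frame_descriptor(frame)
    ctx = labels[np.argmin(np.linalg.norm(feats - d, axis=1))]
    if ctx not in models:
        ctx = next(iter(models))                     # fall back to any trained model
    return models[ctx].apply(frame, learningRate=0)  # segment without updating
```

In this sketch the dynamic selection is a simple nearest-neighbour lookup in feature space; the choice of features (the paper's references also point to colour coherence vectors) and of the per-context segmenter (codebook instead of MoG) are interchangeable design decisions.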


References

  1. Prati, A., Mikic, I., Trivedi, M., Cucchiara, R.: Detecting moving shadows: algorithms and evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(7), 918–923 (2003)

  2. Stauffer, C., Grimson, W.: Adaptive background mixture models for real-time tracking. In: Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pp. 246–252 (1999)

  3. Elgammal, A.M., Harwood, D., Davis, L.S.: Non-parametric model for background subtraction. In: Vernon, D. (ed.) ECCV 2000. LNCS, vol. 1843, pp. 751–767. Springer, Heidelberg (2000)

  4. Kim, K., Chalidabhongse, T.H., Harwood, D., Davis, L.: Real-time foreground-background segmentation using codebook model. Real-Time Imaging 11(3), 172–185 (2005)

  5. Georis, B., Bremond, F., Thonnat, M.: Real-time control of video surveillance systems with program supervision techniques. Machine Vision and Applications 18(3-4), 189–205 (2007)

  6. Pass, G., Zabih, R., Miller, J.: Comparing images using color coherence vectors. In: ACM International Conference on Multimedia, pp. 65–73. ACM Press, New York, USA (1997)

  7. Ester, M., Kriegel, H.P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proc. 2nd Int. Conf. on Knowledge Discovery and Data Mining, Portland, pp. 226–231 (1996)

Editor information

Antonios Gasteratos, Markus Vincze, John K. Tsotsos

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Martin, V., Thonnat, M. (2008). Learning Contextual Variations for Video Segmentation. In: Gasteratos, A., Vincze, M., Tsotsos, J.K. (eds) Computer Vision Systems. ICVS 2008. Lecture Notes in Computer Science, vol 5008. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-79547-6_45

  • DOI: https://doi.org/10.1007/978-3-540-79547-6_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-79546-9

  • Online ISBN: 978-3-540-79547-6

  • eBook Packages: Computer Science, Computer Science (R0)
