Video Segmentation Framework by Dynamic Background Modelling

  • Santiago Molina-Giraldo
  • Andres M. Álvarez-Meza
  • Julio C. García-Álvarez
  • Cesar G. Castellanos-Domínguez
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8156)

Abstract

Detecting moving objects in video streams is the first relevant step of information extraction in many computer vision applications, e.g., video surveillance systems. In this work, a video segmentation framework based on dynamic background modelling is presented. Our approach aims to suitably update the background model of a scene recorded by a static camera. For this purpose, we develop an optical flow based methodology to track moving objects, which may stop or change their movement smoothly along the video. Moreover, a light variation identification stage is employed to avoid confusing illumination changes with objects in movement. Hence, our approach is able to ensure a suitable background model in real-world scenarios. Attained results on well-known datasets show that our framework outperforms state-of-the-art methodologies.
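
As a rough illustration of the pipeline the abstract describes (optical flow to flag moving pixels, an illumination check, and a selective background update), the sketch below combines dense Farneback optical flow with a running-average background model. This is not the authors' implementation: it assumes OpenCV and NumPy, and the thresholds, learning rate, input file name, and the simple global illumination test are illustrative assumptions rather than details taken from the paper.

import cv2
import numpy as np

FLOW_THR = 0.5   # flow magnitude (pixels/frame) below which a pixel counts as static (assumed value)
DIFF_THR = 30    # grey-level difference marking a pixel as foreground (assumed value)
ALPHA = 0.05     # learning rate of the running-average background (assumed value)

cap = cv2.VideoCapture("sequence.avi")        # hypothetical input sequence
ok, frame = cap.read()
prev_grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
background = prev_grey.astype(np.float32)     # initial background model

while True:
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow between consecutive frames (Farneback).
    flow = cv2.calcOpticalFlowFarneback(prev_grey, grey, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    moving = np.linalg.norm(flow, axis=2) > FLOW_THR

    # Pixels that differ strongly from the current background model.
    diff = cv2.absdiff(grey, cv2.convertScaleAbs(background))
    foreground = diff > DIFF_THR

    # Crude global illumination check: a large but uniform intensity shift over
    # static pixels is treated as a light change, not as a moving object.
    static_diff = diff[~moving]
    if static_diff.size and static_diff.mean() > DIFF_THR and static_diff.std() < 10:
        foreground[~moving] = False

    # Update the background only where neither motion nor foreground is detected,
    # so objects that stop are not absorbed into the model immediately.
    update = ~(moving | foreground)
    background[update] = (1 - ALPHA) * background[update] + ALPHA * grey[update]

    segmentation_mask = foreground.astype(np.uint8) * 255   # per-frame result
    prev_grey = grey

cap.release()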

Keywords

background subtraction · optical flow · tracking

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Santiago Molina-Giraldo (1)
  • Andres M. Álvarez-Meza (1)
  • Julio C. García-Álvarez (1)
  • Cesar G. Castellanos-Domínguez (1)

  1. Signal Processing and Recognition Group, Universidad Nacional de Colombia - sede Manizales, Manizales, Colombia
