Efficient Incorporation of Motionless Foreground Objects for Adaptive Background Segmentation

  • I. Huerta
  • D. Rowe
  • J. Gonzàlez
  • J. J. Villanueva
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4069)


In this paper, we exploit the knowledge obtained from detected objects that are incorporated into the background model once they cease moving. Such motionless foreground objects must be handled in security domains such as video surveillance. An adaptive background-modelling algorithm is used for moving-object detection: detected objects that present no motion are identified and added to the background model, thereby becoming part of the new background. These motionless agents are retained for further appearance analysis and agent categorisation.
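The incorporation step the abstract describes can be illustrated as a per-pixel rule: pixels classified as foreground that remain unchanged for several consecutive frames are absorbed into the background model. The following is a minimal sketch under assumed conventions, not the authors' algorithm; the function name, thresholds, and the frame-count heuristic are all illustrative.

```python
import numpy as np

def update_background(bg, frame, fg_age, thresh=30.0, alpha=0.05, absorb_after=5):
    """One step of a simple adaptive background model that absorbs
    motionless foreground into the background (illustrative sketch;
    all parameter values are assumptions, not the paper's)."""
    fg = np.abs(frame - bg) > thresh          # foreground mask
    fg_age = np.where(fg, fg_age + 1, 0)      # count consecutive foreground frames
    absorb = fg_age >= absorb_after           # static long enough to absorb
    # Background pixels adapt slowly toward the current frame;
    # foreground pixels leave the model untouched until absorbed.
    bg = np.where(fg, bg, (1 - alpha) * bg + alpha * frame)
    bg = np.where(absorb, frame, bg)          # incorporate the motionless object
    fg_age = np.where(absorb, 0, fg_age)      # absorbed pixels become background
    return bg, fg & ~absorb, fg_age
```

With these settings, a static object that appears in an empty scene is reported as foreground for `absorb_after` frames and then disappears from the foreground mask, its pixels having become part of the background model.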


Keywords (machine-generated, not supplied by the authors): Background Model · Foreground Object · Foreground Pixel · Security Domain · Background Scene





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • I. Huerta (1)
  • D. Rowe (1)
  • J. Gonzàlez (2)
  • J. J. Villanueva (1)
  1. Computer Vision Centre & Dept. d’Informàtica, Bellaterra, Spain
  2. Institut de Robòtica i Informàtica Ind., UPC, Barcelona, Spain
