Autonomous Surveillance Tolerant to Interference

  • Nadeesha Oliver Ranasinghe
  • Wei-Min Shen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7429)

Abstract

Autonomous recognition of human activities from video streams is an important aspect of surveillance. A key challenge is to learn an appropriate representation or model of each activity. This paper presents a novel solution for recognizing a set of predefined actions in video streams of variable durations, even in the presence of interference, such as noise and gaps caused by occlusions or intermittent data loss. The most significant contribution of this solution is learning the number of states required to represent an action, in a short period of time, without exhaustive testing of all state spaces. It works by using Surprise-Based Learning (SBL) to reason on data (object tracks) provided by a vision module. SBL autonomously learns a set of rules which capture the essential information required to disambiguate each action. These rules are then grouped together to form states and a corresponding Markov chain which can detect actions with varying time duration. Several experiments on the publicly available visint.org video corpora have yielded favorable results.
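
The paper's SBL rule-learning procedure is not reproduced here, but the final classification idea described above — scoring an observation sequence against a per-action Markov chain while skipping gaps so that occlusions or data loss do not zero out the likelihood — can be illustrated with a minimal sketch. The class and symbol alphabet below are hypothetical; transitions use Laplace smoothing, and the per-transition average makes scores comparable across the variable durations the abstract mentions:

```python
from collections import defaultdict
import math

class MarkovActionModel:
    """Illustrative first-order Markov chain over discrete observation
    symbols (e.g. quantized object-track features). This is NOT the
    paper's SBL method; it only sketches gap-tolerant Markov-chain
    scoring with Laplace smoothing."""

    def __init__(self, n_symbols):
        self.n = n_symbols
        # counts[a][b] = number of observed transitions a -> b
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequences):
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                self.counts[a][b] += 1

    def log_likelihood(self, seq):
        # Average per-transition log-probability, so sequences of
        # different durations yield comparable scores.
        total, steps = 0.0, 0
        for a, b in zip(seq, seq[1:]):
            if a is None or b is None:  # gap (occlusion / data loss): skip
                continue
            row = self.counts[a]
            denom = sum(row.values()) + self.n  # Laplace smoothing
            total += math.log((row.get(b, 0) + 1) / denom)
            steps += 1
        return total / max(steps, 1)

def classify(models, seq):
    """Pick the action whose chain assigns the sequence the best score."""
    return max(models, key=lambda name: models[name].log_likelihood(seq))
```

With this sketch, a sequence containing a `None` gap is still attributed to the action whose learned transitions best explain the symbols that were observed.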

Keywords

Machine Learning, Developmental Learning, Predictive Modeling, Recognition, Gap Filling, Temporal and Sequential Learning

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Nadeesha Oliver Ranasinghe (1)
  • Wei-Min Shen (1)

  1. Information Sciences Institute, University of Southern California, USA