Machine Vision and Applications, Volume 25, Issue 1, pp 133–143

Hierarchical abnormal event detection by real time and semi-real time multi-tasking video surveillance system

  • Sung Chun Lee
  • Ram Nevatia
Special Issue Paper

Abstract

In this paper, we describe how to detect abnormal human activities taking place in an outdoor surveillance environment. Human tracks are provided in real time by the baseline video surveillance system. Given this trajectory information, the event analysis module attempts to determine whether a suspicious activity is currently being observed. However, due to real-time processing constraints, false alarms may be generated by video image noise or non-human objects. Filtering out these false detections requires further intensive examination, which can be performed in an off-line fashion. We propose a hierarchical abnormal event detection system that handles real-time and semi-real-time processing as multi-tasking. In the low-level task, a trajectory-based method processes trajectory data and detects abnormal events in real time. In the high-level task, an intensive video analysis algorithm checks whether the detected abnormal event was triggered by actual humans or not.
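The two-level design described above can be viewed as a producer/consumer arrangement: a lightweight real-time task flags candidate events from incoming trajectories, and a slower, semi-real-time task re-examines each candidate to reject false alarms. The sketch below is a hypothetical Python illustration of that hand-off only; the thresholds, track fields, and helper names (is_trajectory_abnormal, verify_human_in_clip) are placeholders, not the paper's actual algorithms.

```python
import queue
import threading

# Hand-off queue between the real-time (low-level) and semi-real-time
# (high-level) tasks. All rules below are illustrative assumptions.
candidate_events = queue.Queue()

def is_trajectory_abnormal(track):
    """Placeholder low-level rule: flag tracks moving faster than a threshold."""
    return track.get("speed", 0.0) > 5.0  # assumed threshold for illustration

def low_level_task(track_stream):
    """Real-time task: scan incoming tracks and enqueue suspected events."""
    for track in track_stream:
        if is_trajectory_abnormal(track):
            candidate_events.put(track)  # defer costly verification to high level

def verify_human_in_clip(track):
    """Placeholder high-level check standing in for intensive video analysis."""
    return track.get("appearance_score", 0.0) > 0.5

def high_level_task():
    """Semi-real-time task: filter alarms caused by noise or non-human objects."""
    while True:
        track = candidate_events.get()
        if track is None:  # sentinel to stop the worker
            break
        if verify_human_in_clip(track):
            print("confirmed abnormal event:", track["id"])

if __name__ == "__main__":
    # Run the high-level verifier in a separate thread while the low-level
    # task processes a small synthetic stream of tracks.
    worker = threading.Thread(target=high_level_task)
    worker.start()
    tracks = [
        {"id": 1, "speed": 6.2, "appearance_score": 0.8},  # confirmed event
        {"id": 2, "speed": 7.0, "appearance_score": 0.1},  # rejected false alarm
        {"id": 3, "speed": 1.0, "appearance_score": 0.9},  # normal track, not flagged
    ]
    low_level_task(tracks)
    candidate_events.put(None)
    worker.join()
```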

Keywords

Video surveillance system · Real-time abnormal event detection · Human trajectory analysis

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  1. Institute for Robotics and Intelligent Systems, University of Southern California, Los Angeles, USA
