Machine Vision and Applications, Volume 20, Issue 5, pp 271–281

Detection of object abandonment using temporal logic

  • Medha Bhargava
  • Chia-Chih Chen
  • M. S. Ryoo
  • J. K. Aggarwal
Original Paper

Abstract

This paper describes a novel framework for a smart threat detection system that uses computer vision to capture, exploit and interpret the temporal flow of events related to the abandonment of an object. Our approach combines contextual information with an analysis of the causal progression of events to decide whether an alarm should be raised. When an unattended object is detected, the system traces it back in time to determine and record its most likely owner(s). In subsequent frames, the system searches the scene for the owner and issues an alert if the owner is not found within a given period of time. Our algorithm has been successfully tested on two benchmark datasets (PETS 2006 Benchmark Data, 2006; i-LIDS Dataset for AVSS, 2007) and yielded results that are substantially more accurate than those of similar systems developed by other academic and industrial research groups.
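The decision flow described in the abstract (associate an unattended object with its likely owner, then raise an alarm only if the owner stays out of view beyond a time limit) can be illustrated with a minimal, self-contained sketch. The class, the `update` function, and the timeout value below are illustrative assumptions, not the authors' implementation; in the actual system the owner-visibility signal would come from appearance-based matching of the recorded owner against people currently in the scene.

```python
from dataclasses import dataclass

# Illustrative sketch (assumed names and values) of the temporal decision logic
# described in the abstract: an unattended object is linked to a presumed owner,
# and an alarm is raised only if the owner remains unseen beyond a timeout.
ABANDONMENT_TIMEOUT = 30.0  # seconds the owner may remain unseen (assumed value)

@dataclass
class UnattendedObject:
    obj_id: int
    last_owner_seen_t: float  # most recent time the presumed owner was matched
    alarmed: bool = False

def update(obj: UnattendedObject, owner_visible: bool, t: float) -> bool:
    """Advance the state of one unattended object at time t.

    owner_visible would come from matching the recorded owner appearance
    against the current frame; here it is supplied directly.
    Returns True when an abandonment alarm should be raised.
    """
    if owner_visible:
        obj.last_owner_seen_t = t  # owner returned or is still nearby
    elif not obj.alarmed and t - obj.last_owner_seen_t > ABANDONMENT_TIMEOUT:
        obj.alarmed = True         # owner absent too long: raise the alert once
        return True
    return False

# Usage: the owner was last seen at t = 5 s and never returns;
# the alarm fires once the timeout elapses.
bag = UnattendedObject(obj_id=1, last_owner_seen_t=5.0)
for t in range(6, 60):
    if update(bag, owner_visible=False, t=float(t)):
        print(f"alarm for object {bag.obj_id} at t={t}s")
```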

Keywords

Abandoned objects · Threat detection · Temporal logic · Public areas


References

  1. Allen, J., Ferguson, G.: Actions and events in interval temporal logic. J. Logic Comput. 4(5), 531–579 (1994)
  2. Auvinet, E., Grossmann, E., Rougier, C., Dahmane, M., Meunier, J.: Left-luggage detection using homographies and simple heuristics. In: Proceedings of IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), New York, pp. 51–58 (2006)
  3. Bhargava, M., Chen, C.-C., Ryoo, M.S., Aggarwal, J.K.: Detection of abandoned objects in crowded environments. In: Proceedings of 2007 IEEE International Conference on Advanced Video and Signal based Surveillance (AVSS), London (2007)
  4. Chen, C.-C., Aggarwal, J.K.: An adaptive background model initialization algorithm with objects moving at different depths. In: IEEE International Conference on Image Processing (ICIP), San Diego (2008)
  5. Grabner, H., Roth, P., Grabner, M.: Autonomous learning of a robust background model for change detection. In: Proceedings of IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), New York, pp. 39–54 (2006)
  6. Gutchess, D., Trajkovic, M., Kohen-Solal, E., Lyons, D., Jain, A.K.: A background model initialization algorithm for video surveillance. In: Proceedings of IEEE International Conference on Computer Vision (ICCV), pp. 733–740 (2001)
  7. i-LIDS Bag and Vehicle Detection Challenge in Association with AVSS (2007)
  8. i-LIDS Dataset for AVSS (2007)
  9. Lewis, J.P.: Fast normalized cross-correlation. In: Industrial Light and Magic, pp. 1–7 (1995)
  10. Li, L., Luo, R., Huang, W., Eng, H.: Context-controlled adaptive background subtraction. In: Proceedings of IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), New York, pp. 31–38 (2006)
  11. Lv, F., Song, X., Wu, B., Singh, V.K., Nevatia, R.: Left-luggage detection using Bayesian inference. In: Proceedings of IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), New York, pp. 83–90 (2006)
  12. Martinez-del-Rincon, J., Herrero-Jaraba, J., Gomez, J., Orrite-Urunuela, C.: Automatic left luggage detection and tracking using multi-camera UKF. In: Proceedings of IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), New York, pp. 59–65 (2006)
  13. Nevatia, R., Zhao, T., Hongeng, S.: Hierarchical language-based representation of events in video streams. In: Proceedings of IEEE Workshop on Event Mining (2003)
  14. PETS 2006 Benchmark Data (2006)
  15. Porikli, F.: Detection of temporal static regions by processing video at different frame rates. In: Proceedings of IEEE International Conference on Advanced Video and Signal based Surveillance (AVSS), London (2007)
  16. Ryoo, M.S., Aggarwal, J.K.: Recognition of composite human activities through context-free grammar based representation. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), New York, pp. 1709–1718 (2006)
  17. Smith, J., Chang, S.-F.: VisualSEEk: a fully automated content-based image query system. In: Proceedings of ACM International Conference on Multimedia, Boston (1996)
  18. Smith, K., Quelhas, P., Gatica-Perez, D.: Detecting abandoned luggage items in a public space. In: Proceedings of IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), New York, pp. 75–82 (2006)
  19. Stauffer, C., Grimson, W.E.L.: Learning patterns of activity using real-time tracking. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 22(8), 747–757 (2000)

Copyright information

© Springer-Verlag 2009

Authors and Affiliations

  • Medha Bhargava (1, 2)
  • Chia-Chih Chen (1)
  • M. S. Ryoo (1)
  • J. K. Aggarwal (1)

  1. Department of Electrical and Computer Engineering, Computer and Vision Research Center, The University of Texas at Austin, Austin, USA
  2. CGGVeritas, Brentford, UK
