Mixed-State Models for Nonstationary Multiobject Activities
We present a mixed-state space approach for modeling and segmenting human activities. The discrete-valued component of the mixed state represents higher-level behavior, while the continuous component models the dynamics within each behavioral segment. A basis of behaviors, derived from generic properties of motion trajectories, is chosen to characterize segments of activities, and a Viterbi-based algorithm is described for detecting the boundaries between segments. The usefulness of the proposed approach for temporal segmentation and anomaly detection is illustrated on the TSA airport tarmac surveillance dataset, the bank monitoring dataset, and the UCF database of human actions.
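To give a flavor of the Viterbi-based boundary detection mentioned above, the following is a minimal sketch, not the paper's actual model: it assumes each discrete behavior is summarized by a simple Gaussian emission model over a 1-D observation (a stand-in for the continuous dynamics), applies a fixed penalty for switching behaviors, and recovers segment boundaries by backtracking. The function name `viterbi_segment` and all parameters are illustrative assumptions.

```python
import numpy as np

def viterbi_segment(obs, means, sigma=1.0, switch_cost=4.0):
    """Assign each observation the most likely discrete behavior label,
    penalizing label switches, then return labels and segment boundaries.

    Illustrative sketch: each behavior k is modeled as a Gaussian with
    mean means[k] and shared std sigma, standing in for a full dynamical
    model of the continuous state.
    """
    T, K = len(obs), len(means)
    # Emission log-likelihoods under a Gaussian per behavior (T x K).
    ll = -0.5 * ((np.asarray(obs)[:, None] - np.asarray(means)[None, :]) / sigma) ** 2
    score = ll[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # Transition score: staying is free, switching costs switch_cost.
        trans = score[:, None] - switch_cost * (1 - np.eye(K))
        back[t] = trans.argmax(axis=0)
        score = trans.max(axis=0) + ll[t]
    # Backtrack the best label sequence.
    labels = np.zeros(T, dtype=int)
    labels[-1] = score.argmax()
    for t in range(T - 2, -1, -1):
        labels[t] = back[t + 1, labels[t + 1]]
    # Segment boundaries are the time steps where the label changes.
    boundaries = [t for t in range(1, T) if labels[t] != labels[t - 1]]
    return labels, boundaries
```

For example, on a signal that sits near 0 for five steps and near 5 for five steps with behavior means `[0.0, 5.0]`, the sketch reports a single boundary at the transition point. The switch cost plays the role of the discrete-state transition prior, discouraging spurious short segments.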
Keywords: Information Technology, Human Action, State Model, Quantum Information, Generic Property
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.