Extracting Motion Features for Visual Human Activity Representation

  • Filiberto Pla
  • Pedro Ribeiro
  • José Santos-Victor
  • Alexandre Bernardino
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3522)

Abstract

This paper presents a technique to characterize human actions in visual surveillance scenarios, describing basic human movements qualitatively under general imaging conditions. The proposed representation is based on focus-of-attention concepts, used as part of an active tracking process to describe target movements. This representation, named the focus-of-attention (FOA) representation, is built from motion information. A segmentation method is also presented to group the FOA into uniform temporal segments. The segmentation enables a higher-level description of human actions, obtained by further classifying each segment into different types of basic movements.
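The abstract describes grouping per-frame motion features into uniform temporal segments. As a minimal illustrative sketch (not the authors' actual algorithm), the hypothetical function below greedily splits a sequence of per-frame feature vectors whenever a frame deviates from the running segment mean by more than a threshold:

```python
# Hypothetical sketch: group per-frame motion features (standing in for the
# FOA representation) into temporally uniform segments. The greedy
# mean-distance thresholding here is an illustrative stand-in, not the
# segmentation method proposed in the paper.

def segment_features(frames, threshold=1.0):
    """Split a sequence of per-frame feature vectors into segments whose
    frames stay within `threshold` (Euclidean distance) of the running
    segment mean."""
    segments = []
    current = [frames[0]]
    for f in frames[1:]:
        n = len(current)
        mean = [sum(v[i] for v in current) / n for i in range(len(f))]
        dist = sum((a - b) ** 2 for a, b in zip(f, mean)) ** 0.5
        if dist > threshold:          # abrupt change: close the segment
            segments.append(current)
            current = [f]
        else:                         # still uniform: extend the segment
            current.append(f)
    segments.append(current)
    return segments

# Two stable motion regimes separated by an abrupt change:
frames = [[0.0, 0.0]] * 5 + [[3.0, 3.0]] * 5
print(len(segment_features(frames, threshold=1.0)))  # -> 2
```

Each resulting segment could then be classified independently into a basic movement type, as the abstract suggests.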

Keywords

Receptive Field · Optical Flow · Video Shot · Human Activity Recognition · Segmented Target

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Filiberto Pla¹
  • Pedro Ribeiro²
  • José Santos-Victor²
  • Alexandre Bernardino²
  1. Computer Vision Group, Departament de Llenguatges i Sistemes Informàtics, Universitat Jaume I, Castellón, Spain
  2. Computer Vision Lab (VisLab), Instituto de Sistemas e Robótica, Instituto Superior Técnico, Lisboa, Portugal