
Action Recognition Using Motion Primitives and Probabilistic Edit Distance

  • P. Fihl
  • M. B. Holte
  • T. B. Moeslund
  • L. Reng
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4069)

Abstract

In this paper we describe a recognition approach based on the notion of primitives. As opposed to recognizing actions from temporal trajectories or temporal volumes, primitive-based recognition represents a temporal sequence containing an action by only a few characteristic time instances. The human whereabouts at these instances are extracted using double difference images and represented by four features. In each frame the primitive, if any, that best explains the observed data is identified. This leads to a discrete recognition problem, since a video sequence is converted into a string of symbols, each representing a primitive. After pruning the string, a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The approach is evaluated on five one-arm gestures and achieves a recognition rate of 91.3%. We conclude that this is a promising result, but one that also leaves room for further improvement.
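The pipeline described above (double difference images, per-frame primitive matching by Mahalanobis distance, pruning of the resulting symbol string, and edit-distance classification) can be summarized in a short sketch. The code below is an illustrative reconstruction, not the authors' implementation: the function names, the motion threshold, the single-character primitive symbols, and the use of a plain (unweighted) Levenshtein distance in place of the paper's probabilistic Edit Distance classifier are all assumptions.

```python
# Minimal sketch of a primitive-string recognition pipeline, assuming each
# primitive is modelled by a mean feature vector and an inverse covariance
# matrix and is denoted by a single-character symbol. Not the authors' code.
import numpy as np


def double_difference(prev, curr, nxt, thresh=30):
    """Binary double-difference image: pixels that moved in both
    |curr - prev| and |nxt - curr| (threshold is an assumption)."""
    d1 = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    d2 = np.abs(nxt.astype(int) - curr.astype(int)) > thresh
    return np.logical_and(d1, d2)


def match_primitive(features, primitives, max_dist=3.0):
    """Return the symbol of the primitive that best explains the 4-D feature
    vector (smallest Mahalanobis distance), or None if nothing is close enough."""
    best_sym, best_d = None, max_dist
    for sym, (mean, inv_cov) in primitives.items():
        diff = features - mean
        d = float(np.sqrt(diff @ inv_cov @ diff))
        if d < best_d:
            best_sym, best_d = sym, d
    return best_sym


def prune(frame_symbols):
    """Drop frames with no detected primitive and collapse repeated symbols."""
    out = []
    for s in frame_symbols:
        if s is not None and (not out or out[-1] != s):
            out.append(s)
    return "".join(out)


def edit_distance(a, b):
    """Standard (unweighted) Levenshtein distance between two symbol strings."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            prev_diag, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev_diag + cost)
    return dp[len(b)]


def classify(observed, action_templates):
    """Pick the action whose template string is closest to the observed string."""
    return min(action_templates, key=lambda a: edit_distance(observed, action_templates[a]))


# Hypothetical usage: primitive symbols 'a'..'e', two gesture templates.
# templates = {"wave": "abcba", "point": "ade"}
# classify(prune(per_frame_symbols), templates)  -> name of the closest gesture
```

In the paper the edit operations are weighted probabilistically rather than counted uniformly; the unweighted distance above is kept only to show where such weights would enter (the substitution cost and the insertion/deletion penalties).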

Keywords

Action Recognition · Mahalanobis Distance · Edit Distance · Visual Hull · Motion Primitive



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • P. Fihl (1)
  • M. B. Holte (1)
  • T. B. Moeslund (1)
  • L. Reng (1)
  1. Laboratory of Computer Vision and Media Technology, Aalborg University, Denmark
