Frontiers of Computer Science, Volume 9, Issue 6, pp 956–965

Distribution of action movements (DAM): a descriptor for human action recognition

  • Franco Ronchetti
  • Facundo Quiroga
  • Laura Lanzarini
  • Cesar Estrebou
Research Article


Human action recognition from skeletal data is an important and active area of research in which the state of the art has not yet achieved near-perfect accuracy on many well-known datasets. In this paper, we introduce the Distribution of Action Movements (DAM) descriptor, a novel action descriptor based on the distribution of the directions of joint motions between frames, taken over the set of all possible motions in the dataset. The descriptor is computed as a normalized histogram over a set of representative joint-motion directions, which are in turn obtained via clustering. While the descriptor is global in the sense that it represents the overall distribution of movement directions in an action, it partially retains the action's temporal structure through a windowing scheme.
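The pipeline the abstract describes can be sketched in a few steps: take per-frame joint displacements, cluster them to obtain representative directions, and build per-window normalized histograms of cluster assignments. The sketch below is a minimal, assumption-laden illustration (the function names `kmeans` and `dam_descriptor`, the unit-normalization of displacements, the number of windows, and the from-scratch k-means are all choices of this example, not details taken from the paper):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means to find representative motion directions."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep old centroid if cluster is empty
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

def dam_descriptor(skeleton, centroids, n_windows=4):
    """DAM-style descriptor: per-window normalized histograms of joint
    motion directions over a set of representative directions.

    skeleton: (T, J, 3) array of J joint positions over T frames.
    """
    motions = np.diff(skeleton, axis=0).reshape(-1, 3)  # (T-1)*J displacement vectors
    # keep directions only: normalize to unit length, drop near-zero motions
    norms = np.linalg.norm(motions, axis=1)
    motions = motions[norms > 1e-8] / norms[norms > 1e-8, None]
    # assign each motion direction to its nearest representative direction
    dists = np.linalg.norm(motions[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    k = len(centroids)
    # windowing: split assignments in temporal order, one histogram per window
    hists = []
    for chunk in np.array_split(labels, n_windows):
        h = np.bincount(chunk, minlength=k).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.concatenate(hists)  # length k * n_windows

# usage on synthetic data: cluster training motions, then describe a sequence
rng = np.random.default_rng(1)
skel = rng.normal(size=(30, 20, 3))                       # 30 frames, 20 joints
train_motions = np.diff(skel, axis=0).reshape(-1, 3)
cents = kmeans(train_motions, k=8)
desc = dam_descriptor(skel, cents, n_windows=4)           # 8 * 4 = 32 values
```

The concatenation of per-window histograms is what lets a global, order-free distribution still carry coarse temporal information: the same motions occurring early versus late in an action land in different windows and therefore in different parts of the final vector.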

The descriptor, combined with a standard classifier, outperforms several state-of-the-art techniques on many well-known datasets.


Keywords: human action recognition, descriptor, Prob-SOM, MSRC12, Action3D





Copyright information

© Higher Education Press and Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Franco Ronchetti (1)
  • Facundo Quiroga (1)
  • Laura Lanzarini (1)
  • Cesar Estrebou (1)
  1. Instituto de Investigacion en Informatica III-LIDI, Facultad de Informatica, Universidad Nacional de La Plata, La Plata, Argentina
