Distribution of action movements (DAM): a descriptor for human action recognition
Human action recognition from skeletal data is an important and active area of research, in which state-of-the-art methods have yet to achieve near-perfect accuracy on many well-known datasets. In this paper, we introduce the Distribution of Action Movements descriptor, a novel action descriptor based on the distribution of the directions of joint motion between frames, taken over the set of all possible motions in the dataset. The descriptor is computed as a normalized histogram over a set of representative joint directions, which are in turn obtained via clustering. While the descriptor is global in the sense that it represents the overall distribution of movement directions of an action, it partially retains the action's temporal structure by applying a windowing scheme.
The descriptor, together with a standard classifier, outperforms several state-of-the-art techniques on many well-known datasets.
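The pipeline described in the abstract (frame-to-frame joint displacement directions, assignment to representative directions obtained by clustering, and per-window normalized histograms) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, array shapes, and the choice of Euclidean nearest-center assignment are assumptions.

```python
import numpy as np

def dam_descriptor(skeleton, centers, num_windows=3):
    """Sketch of a Distribution-of-Action-Movements-style descriptor.

    skeleton: (T, J, 3) array of J joint positions over T frames.
    centers:  (K, 3) representative motion directions, assumed to be
              precomputed by clustering all per-joint frame-to-frame
              directions in the training set.
    Returns the concatenation of per-window normalized histograms.
    """
    # Per-joint displacement between consecutive frames, unit-normalized
    # so that only the direction of motion matters.
    disp = np.diff(skeleton, axis=0).reshape(-1, 3)          # ((T-1)*J, 3)
    norms = np.linalg.norm(disp, axis=1, keepdims=True)
    dirs = disp / np.maximum(norms, 1e-8)

    # Assign each motion direction to its nearest representative direction.
    dists = np.linalg.norm(dirs[:, None, :] - centers[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)

    # Windowing scheme: split the sequence into segments and histogram
    # each one, partially preserving temporal order.
    hists = []
    for chunk in np.array_split(labels, num_windows):
        h = np.bincount(chunk, minlength=len(centers)).astype(float)
        hists.append(h / max(h.sum(), 1.0))
    return np.concatenate(hists)
```

The resulting fixed-length vector can then be fed to any standard classifier, which is how the abstract describes the descriptor being used.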
Keywords: human action recognition, descriptor, Prob-SOM, MSRC12, Action3D