
International Journal of Social Robotics, Volume 11, Issue 2, pp 219–234

Skeleton-Based Human Action Recognition by Pose Specificity and Weighted Voting

  • Tingting Liu
  • Jiaole Wang
  • Seth Hutchinson
  • Max Q.-H. Meng
Article

Abstract

This paper introduces a human action recognition method based on skeletal data captured by Kinect or other depth sensors. After a series of pre-processing steps, action features such as position, velocity, and acceleration are extracted from each frame to capture both the dynamic and the static information of human motion, making full use of the skeletal data. The most challenging problem in skeleton-based human action recognition is the large variability within and across subjects. To handle this problem, we propose to divide human poses into two major categories: discriminating poses and common poses. A pose specificity metric is proposed to quantify how discriminative each pose is. Finally, action recognition is carried out by a weighted voting method that uses the k nearest neighbors found in the training dataset as voters, with the pose specificity of each neighbor serving as the weight of its ballot. Experiments on two benchmark datasets show that the proposed method outperforms state-of-the-art methods.
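The abstract describes a three-stage pipeline: per-frame features (position, velocity, acceleration), a specificity score for each training pose, and a specificity-weighted k-nearest-neighbor vote. The Python sketch below is one illustrative reading of that pipeline, not the paper's implementation: the feature layout, the finite-difference derivatives, the 30 Hz frame rate, and the names frame_features and classify_frame are all assumptions, and the pose-specificity weights are taken as a given input rather than computed by the paper's metric.

```python
import numpy as np

def frame_features(joints, dt=1.0 / 30.0):
    """Per-frame features stacking position (static) with velocity and
    acceleration (dynamic). Hypothetical layout; the paper's exact
    pre-processing is not reproduced here.

    joints: (T, J, 3) array of T frames of J 3-D joint positions.
    Returns a (T, J * 9) feature matrix.
    """
    vel = np.gradient(joints, dt, axis=0)   # first derivative per joint
    acc = np.gradient(vel, dt, axis=0)      # second derivative per joint
    return np.concatenate([joints, vel, acc], axis=2).reshape(len(joints), -1)

def classify_frame(feat, train_feats, train_labels, specificity, k=5):
    """Weighted k-NN vote for a single frame: each of the k nearest
    training poses casts a ballot for its action label, and the ballot
    is weighted by that pose's specificity, so discriminating poses
    outvote common ones.
    """
    dists = np.linalg.norm(train_feats - feat, axis=1)
    votes = {}
    for i in np.argsort(dists)[:k]:
        votes[train_labels[i]] = votes.get(train_labels[i], 0.0) + specificity[i]
    return max(votes, key=votes.get)
```

A sequence-level prediction would then aggregate these per-frame ballots over the whole clip, again letting high-specificity frames dominate the outcome.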

Keywords

Action recognition · Human skeleton · Pose specificity · Weighted voting

Notes

Funding

This study was funded in part by RGC (GRF # 14205914, GRF # 14210117) and in part by the Shenzhen Science and Technology Innovation project # JCYJ20170413-161616163, awarded to Max Q.-H. Meng.

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.


Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  1. Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, N.T., China
  2. Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, USA
  3. The Shenzhen Research Institute, Chinese University of Hong Kong in Shenzhen, Shenzhen, China
