
Study on Human Body Action Recognition

  • Dong Yin
  • Yu-Qing Miao
  • Kang Qiu
  • An Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10996)

Abstract

A novel Kinect-based human body action recognition method is proposed. First, key frames are extracted from the original data using a quaternion-based key-frame extraction technique. Second, for the skeleton information of each key frame, a moving pose feature is constructed from the motion information of each joint. Combined with the key frames, online segmentation of continuous actions is then performed with a boundary detection method. Finally, the features are encoded as Fisher vectors and fed to a linear SVM classifier to complete the recognition. Experiments on the public MSR Action3D dataset and on a dataset collected for this paper show that the proposed method achieves good recognition performance.
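To make the pipeline concrete, the sketch below illustrates in Python how two of its stages might look: quaternion-based key-frame selection, and Fisher-vector encoding feeding a linear SVM. The paper's exact formulation is not given on this page, so every function name, the selection threshold, and the stand-in moving-pose descriptors are assumptions for illustration only, not the authors' implementation.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import LinearSVC

    def quaternion_distance(q1, q2):
        """Geodesic distance (rotation angle, radians) between unit quaternions."""
        dot = np.clip(np.abs(np.sum(q1 * q2, axis=-1)), 0.0, 1.0)
        return 2.0 * np.arccos(dot)

    def select_key_frames(joint_quats, threshold=0.5):
        """Keep a frame when its total joint rotation relative to the last kept
        frame exceeds `threshold` (an assumed heuristic, not the paper's rule).
        joint_quats: array (T, J, 4) of unit quaternions per frame and joint."""
        keys = [0]
        for t in range(1, len(joint_quats)):
            if quaternion_distance(joint_quats[t], joint_quats[keys[-1]]).sum() > threshold:
                keys.append(t)
        return keys

    def fisher_vector(descriptors, gmm):
        """Improved Fisher vector of local descriptors under a fitted
        diagonal-covariance GMM (gradients w.r.t. means and variances)."""
        T, _ = descriptors.shape
        q = gmm.predict_proba(descriptors)                 # (T, K) posteriors
        parts = []
        for k in range(gmm.n_components):
            diff = (descriptors - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])
            qk = q[:, k:k + 1]
            parts.append((qk * diff).sum(axis=0) / (T * np.sqrt(gmm.weights_[k])))
            parts.append((qk * (diff ** 2 - 1)).sum(axis=0)
                         / (T * np.sqrt(2 * gmm.weights_[k])))
        fv = np.concatenate(parts)
        fv = np.sign(fv) * np.sqrt(np.abs(fv))             # power normalisation
        return fv / (np.linalg.norm(fv) + 1e-12)           # L2 normalisation

    # Illustrative end-to-end run on random stand-in data.
    rng = np.random.default_rng(0)
    pool = rng.normal(size=(500, 30))                      # stand-in moving-pose descriptors
    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          random_state=0).fit(pool)
    X = np.stack([fisher_vector(rng.normal(size=(40, 30)), gmm) for _ in range(20)])
    y = rng.integers(0, 4, size=20)                        # dummy action labels
    clf = LinearSVC(max_iter=5000).fit(X, y)               # linear SVM classifier
    print(clf.predict(X[:5]))

In this kind of pipeline, the GMM is fitted once on descriptors pooled from training sequences, each sequence is summarised by a single fixed-length Fisher vector, and the linear SVM then operates on those vectors; the power and L2 normalisations are the standard "improved Fisher vector" steps.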

Keywords

Action recognition · Kinect · Support vector machine · Fisher vector

Acknowledgments

This work is supported by the Guangxi Natural Science Foundation Project (2014GXNSFAA118395), the research project of the Guangxi Colleges and Universities Key Laboratory of Intelligent Processing of Image and Graphics (GIIP201706), the National Natural Science Foundation of China (61763007), and the key project of the Guangxi Natural Science Foundation (2017GXNSFDA198028).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Information Science and Technology, USTC, Hefei, China
  2. Key Laboratory of Electromagnetic Space Information of CAS, Hefei, China
  3. School of Computer Science and Information Security, GUET, Guilin, China
  4. Key Laboratory of Intelligent Processing of Image and Graphics, GUET, Guilin, China
