An action identification method based on FSM and limb dry weight

Abstract

This article studies a motion recognition method for human-centered smart systems. First, we learn feature encoding sequences from the training sets and then extract subactions from the learned sequences using a statistical model. Building on a hierarchical probabilistic context-free grammar characterization of limb sequences, we generate grammatical rules for the different actions from the action training sets and characterize the actions and subactions using finite state machines (FSMs). To measure the matching degree of each limb sequence, we introduce a limb weight factor into gesture recognition. When identifying the same limb movement sequence, we obtain two compatible sets of recognition probabilities; based on these, we derive a feature probability fusion formula that combines the two feature sets of the limb movement sequence. Finally, we report the recognition results of two experiments and show the effect of the proposed method on several typical actions. The experimental results show that, on the same dataset, the proposed action recognition method achieves better recognition accuracy and lower time cost than the other methods.
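To make the pipeline above concrete, here is a minimal Python sketch of the three ideas the abstract names: a finite state machine over sub-action symbols, per-limb weight factors that score how well each limb's sequence matches an action, and a fusion of two recognition-probability sets. Every identifier, weight value, and the convex-combination fusion rule below is an illustrative assumption, not the paper's actual grammar, limb weights, or fusion formula.

```python
# Illustrative sketch only: each action is modeled as an FSM over sub-action
# symbols, each limb's symbol sequence is matched against the FSM, per-limb
# matching degrees are combined with limb weight factors, and two feature
# channels are fused into a final recognition probability. All names, weights,
# and the fusion rule are assumptions, not the paper's definitions.

from dataclasses import dataclass, field


@dataclass
class ActionFSM:
    """Finite state machine accepting one action's sub-action sequence."""
    name: str
    transitions: dict  # (state, symbol) -> next state
    start: str = "s0"
    accepting: frozenset = field(default_factory=frozenset)

    def match_score(self, symbols):
        """Fraction of symbols consumed along valid transitions (0..1)."""
        state, consumed = self.start, 0
        for sym in symbols:
            nxt = self.transitions.get((state, sym))
            if nxt is None:
                continue  # skip symbols the FSM cannot consume
            state, consumed = nxt, consumed + 1
        score = consumed / max(len(symbols), 1)
        return score if state in self.accepting else 0.5 * score


def weighted_action_probability(fsms, limb_sequences, limb_weights):
    """Combine per-limb FSM matching degrees using limb weight factors."""
    probs = {}
    for fsm in fsms:
        probs[fsm.name] = sum(
            limb_weights[limb] * fsm.match_score(seq)
            for limb, seq in limb_sequences.items()
        )
    return probs


def fuse(p1, p2, alpha=0.6):
    """Hypothetical fusion of two probability sets (convex combination)."""
    return {a: alpha * p1[a] + (1 - alpha) * p2[a] for a in p1}


if __name__ == "__main__":
    # "wave": raise the arm (r), then swing (w) repeatedly; right arm dominates.
    wave = ActionFSM(
        name="wave",
        transitions={("s0", "r"): "s1", ("s1", "w"): "s1"},
        accepting=frozenset({"s1"}),
    )
    limb_seqs = {"right_arm": ["r", "w", "w"], "left_arm": ["r"]}
    weights = {"right_arm": 0.7, "left_arm": 0.3}  # illustrative limb weights
    p_skeleton = weighted_action_probability([wave], limb_seqs, weights)
    p_depth = {"wave": 0.8}  # stand-in for a second feature channel
    print(fuse(p_skeleton, p_depth))
```

Running the example fuses the FSM-based, limb-weighted score for a right-arm-dominated "wave" with a stand-in probability from a second feature channel, mirroring the two-set fusion step described in the abstract.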

Author information

Correspondence to Xiaojuan Ban.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Ban, X., Zhang, D., Sun, J. et al. An action identification method based on FSM and limb dry weight. Pers Ubiquit Comput (2020). https://doi.org/10.1007/s00779-019-01279-0

Keywords

  • Limb movement
  • Finite state machine
  • Limb weight
  • Motion recognition