
Orientation Invariant Skeleton Feature (OISF): a new feature for Human Activity Recognition

Published in: Multimedia Tools and Applications

Abstract

Human Activity Recognition (HAR) is the process of identifying a person's activity by analysing consecutive frames of a video. In many application areas, activity identification is either the direct goal or a key component of a larger objective; examples include surveillance systems, elderly healthcare monitoring, and abnormal-activity detection such as fight or theft detection. Robust and accurate activity recognition is challenging for diverse reasons, such as changing ambient illumination, noise, background turbulence, and camera placement. The existing literature discusses techniques for identifying human activity, but these approaches are largely restricted to videos recorded with a static camera. The proposed approach aims to fill this gap. A new skeleton-based feature for human activity recognition, the "Orientation Invariant Skeleton Feature (OISF)", is introduced and used to train a Random Forest (RF) classifier. The efficiency of the newly introduced OISF is analysed on videos recorded with multiple cameras positioned at two different slant angles. Experimental results reveal that OISF has minimal dependency on variations in camera orientation. The accuracy achieved is ≈ 99.30% on the ViHASi dataset, ≈ 96.85% on the KTH dataset, and ≈ 98.34% on an in-house dataset, which is higher than the accuracies achieved with existing features. The improved recognition accuracy supports the suitability of the proposed approach for commercial use.
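The paper's exact OISF definition is given in the full text, not in this abstract. As a minimal illustrative sketch of the general pipeline described above, the snippet below builds a *hypothetical* orientation-invariant skeleton descriptor (normalised pairwise joint distances, which are unchanged by rotation, translation, and uniform scaling of the skeleton) and trains a Random Forest classifier on it. The function `oisf_like_features`, the joint count, and the random data are all assumptions for demonstration, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def oisf_like_features(joints):
    """Hypothetical orientation-invariant descriptor (not the paper's OISF).

    joints: (N, 2) array of 2D skeleton joint coordinates.
    Pairwise distances are invariant to rotation and translation of the
    camera/skeleton; dividing by their sum also removes uniform scale.
    """
    # All pairwise Euclidean distances between joints, shape (N, N)
    d = np.linalg.norm(joints[:, None] - joints[None, :], axis=-1)
    # Keep each unordered pair once (strict upper triangle)
    iu = np.triu_indices(len(joints), k=1)
    pair = d[iu]
    return pair / pair.sum()  # scale-normalised distance ratios

# Toy training set: 40 random 15-joint skeletons with binary activity labels
rng = np.random.default_rng(0)
X = np.array([oisf_like_features(rng.random((15, 2))) for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```

Because the descriptor depends only on relative joint geometry, the same skeleton viewed under a rotated, shifted, or scaled camera frame yields the same feature vector, which is the property the abstract attributes to OISF under varying camera orientation.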



Author information

Corresponding author: Neelam Dwivedi.


Appendix: Details of Algorithms

The algorithms are presented as figure listings (figures h–k) in the published article.

About this article

Cite this article

Dwivedi, N., Singh, D.K. & Kushwaha, D.S. Orientation Invariant Skeleton Feature (OISF): a new feature for Human Activity Recognition. Multimed Tools Appl 79, 21037–21072 (2020). https://doi.org/10.1007/s11042-020-08902-w
