Abstract
Identifying human activity with good precision is a challenging task in computer vision, especially for surveillance applications. This paper proposes a rule-based classifier method capable of view-invariant recognition of multiple human activities in real time. A single Kinect sensor provides the RGB-D data stream. First, a skeleton-tracking algorithm is applied; activities are then recognized independently from each tracked skeleton. Rules are defined to recognize discrete skeleton postures, and a particular order of postures is classified into an activity. In the experiments, we examined about 14 activities and found the proposed method robust and efficient with respect to multiple views, scaling, and phase variation across different realistic acts. A self-generated dataset recorded in a controlled environment was used: about 2 min of data were collected from two male subjects performing multiple activities. Experimental results show that the proposed method handles multi-view, scale-variant, and phase-variant activities, achieving a detection accuracy of 98%.
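The pipeline described in the abstract (track skeleton joints, apply rules to recognize discrete postures, then classify an ordered sequence of postures into an activity) can be illustrated with a minimal Python sketch. The joint names follow the Kinect skeleton convention, but the specific rules, thresholds, and the `classify_posture`/`classify_activity` helpers are illustrative assumptions, not the authors' actual rule set.

```python
def classify_posture(joints):
    """Classify a single skeleton frame into a coarse posture.

    joints: dict mapping a Kinect-style joint name -> (x, y, z) in
    metres, with y as the vertical axis. Thresholds are illustrative.
    """
    head_y = joints["head"][1]
    hip_y = joints["hip_center"][1]
    knee_y = min(joints["knee_left"][1], joints["knee_right"][1])

    # Rule 1: head roughly at hip height -> the body is horizontal.
    if abs(head_y - hip_y) < 0.25:
        return "lying"
    # Rule 2: hips close to knee height -> legs are bent under the torso.
    if abs(hip_y - knee_y) < 0.30:
        return "sitting"
    # Default: an upright skeleton.
    return "standing"


def classify_activity(posture_sequence):
    """Map an ordered sequence of per-frame postures to an activity,
    mirroring the paper's idea of classifying a particular order of
    postures into an activity (these transition rules are hypothetical).
    """
    if "standing" in posture_sequence and posture_sequence[-1] == "sitting":
        return "sit_down"
    if "sitting" in posture_sequence and posture_sequence[-1] == "standing":
        return "stand_up"
    # Otherwise report the static activity given by the final posture.
    return posture_sequence[-1]
```

Because each rule reads only relative joint heights rather than absolute image coordinates, a classifier of this shape is naturally tolerant to scale and viewpoint changes, which is the property the paper evaluates.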
Ethics declarations
Conflict of interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Additional information
Communicated by Suresh Chandra Satapathy.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Varshney, N., Bakariya, B., Kushwaha, A.K.S. et al. Rule-based multi-view human activity recognition system in real time using skeleton data from RGB-D sensor. Soft Comput 27, 405–421 (2023). https://doi.org/10.1007/s00500-021-05649-w
Keywords
- Activity recognition
- Microsoft Kinect
- Multi-view
- Rule-based classifier
- Tracking skeleton