Abstract
Human Activity Recognition (HAR) analyzes raw time-series signals from sensors embedded in smartphones and wearable devices to infer what people are doing. It has become widely used in smart home environments, particularly for monitoring behavior in ambient assisted living to support and rehabilitate the elderly. A HAR system proceeds through a pipeline of data acquisition, noise and distortion removal, feature extraction, feature selection, and classification. Recently, a number of state-of-the-art techniques have been proposed for extracting and selecting features, with classification performed by traditional machine learning. However, many of these techniques rely on simple feature-extraction processes and cannot recognize complex actions. As high-performance computing has become more accessible, many HAR systems now use deep learning algorithms to discover and classify features efficiently. In this study, video summarization is used to detect key frames and select relevant sequences in two datasets. The selected frames are resized using adaptive frame cropping, and the resulting images are given as input to the proposed Convolutional Neural Network (CNN). Experiments were conducted on various activities and evaluated by classification rate; the results show that the CNN achieved a classification rate of 98.22% on the Weizmann action dataset.
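The front end of the pipeline described above, selecting key frames from a clip and then cropping and resizing them before they reach the CNN, can be illustrated with a minimal NumPy sketch. This is only an assumption-laden illustration: the motion-difference heuristic, the function names `select_key_frames` and `center_crop_resize`, and all sizes are hypothetical stand-ins, not the chapter's actual method.

```python
import numpy as np

def select_key_frames(frames, k):
    """Pick the k frames with the largest mean absolute difference
    from their predecessor (a simple proxy for motion content)."""
    diffs = np.abs(frames[1:] - frames[:-1]).mean(axis=(1, 2))
    # Frame 0 has no predecessor, so difference i maps to frame i + 1.
    top = np.argsort(diffs)[-k:] + 1
    return frames[np.sort(top)]

def center_crop_resize(frame, size):
    """Crop the central square region, then resize to (size, size)
    by nearest-neighbour index sampling."""
    h, w = frame.shape
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    crop = frame[y0:y0 + s, x0:x0 + s]
    idx = (np.arange(size) * s / size).astype(int)
    return crop[np.ix_(idx, idx)]

# Toy clip: 10 grayscale frames of 48x64 random noise.
rng = np.random.default_rng(0)
clip = rng.random((10, 48, 64)).astype(np.float32)
keys = select_key_frames(clip, k=4)
batch = np.stack([center_crop_resize(f, 32) for f in keys])
print(batch.shape)  # (4, 32, 32) -- ready to feed a CNN input layer
```

In a real system the frame-difference score would be replaced by the chapter's video-shortening criterion, and the fixed center crop by the adaptive cropping step, but the data flow (clip → key frames → uniform-size batch) is the same.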
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Gokula Krishnan, V., Kishore Kumar, A., Bhagya Sri, G., Mohana Prakash, T.A., Abdul Saleem, P.A., Divya, V. (2023). A Deep Learning Based Human Activity Recognition System for Monitoring the Elderly People. In: Shukla, P.K., Singh, K.P., Tripathi, A.K., Engelbrecht, A. (eds) Computer Vision and Robotics. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-19-7892-0_11
DOI: https://doi.org/10.1007/978-981-19-7892-0_11
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-7891-3
Online ISBN: 978-981-19-7892-0
eBook Packages: Intelligent Technologies and Robotics (R0)