
A Deep Learning Based Human Activity Recognition System for Monitoring the Elderly People

  • Conference paper
  • First Online:
Computer Vision and Robotics

Abstract

Human Activity Recognition (HAR) analyzes raw time-series signals from sensors embedded in smartphones and wearable devices to infer what people are doing. It has become widely used in smart home environments, particularly for behavior monitoring in ambient assisted living, where it supports elderly care and rehabilitation. A typical HAR system proceeds through a series of steps: data acquisition, noise and distortion removal, feature extraction, feature selection, and classification. A number of state-of-the-art techniques have recently been proposed for extracting and selecting features, with classification performed by traditional machine learning. However, techniques that rely on simple feature extraction processes are unable to recognize complex actions. As high-performance computing has become more common, many HAR systems now use deep learning algorithms to discover and classify features automatically. In this study, video shortening is used to detect key frames and select relevant sequences from two datasets. The selected frames are resized using adaptive frame cropping, and the resulting images are given as input to the proposed Convolutional Neural Network (CNN). Experiments are conducted on various activities and evaluated in terms of classification rate; the results show that the CNN achieves a classification rate of 98.22% on the Weizmann action dataset.
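The preprocessing pipeline described above (video shortening to pick key frames, then adaptive cropping and resizing before CNN classification) can be sketched as follows. This is an illustrative sketch only, assuming an inter-frame intensity-difference criterion for shortening and a foreground bounding box for cropping; the paper does not specify its exact criteria, and the function names, threshold, and output size are hypothetical:

```python
import numpy as np

def select_key_frames(frames, threshold=10.0):
    """Video shortening: keep a frame only when its mean absolute
    difference from the previously kept frame exceeds `threshold`."""
    keys = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float)
                      - frames[keys[-1]].astype(float)).mean()
        if diff > threshold:
            keys.append(i)
    return keys

def adaptive_crop(frame, out_size=64):
    """Adaptive frame cropping: crop to the bounding box of
    above-average-intensity (foreground) pixels, then rescale to
    out_size x out_size by nearest-neighbour index sampling."""
    mask = frame > frame.mean()
    ys, xs = np.where(mask)
    if len(ys) == 0:  # no foreground found: fall back to the full frame
        y0, y1, x0, x1 = 0, frame.shape[0], 0, frame.shape[1]
    else:
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
    crop = frame[y0:y1, x0:x1]
    ry = np.arange(out_size) * crop.shape[0] // out_size
    rx = np.arange(out_size) * crop.shape[1] // out_size
    return crop[np.ix_(ry, rx)]

# Example: three identical frames followed by one changed frame;
# only frames 0 and 3 survive the shortening step.
frames = [np.zeros((32, 32))] * 3 + [np.full((32, 32), 255.0)]
key_idx = select_key_frames(frames)
resized = [adaptive_crop(frames[i]) for i in key_idx]
```

The fixed-size crops produced here would then be batched and fed to the CNN classifier; the network architecture itself is not reproduced in this sketch.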




Corresponding author

Correspondence to V. Gokula Krishnan.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Gokula Krishnan, V., Kishore Kumar, A., Bhagya Sri, G., Mohana Prakash, T.A., Abdul Saleem, P.A., Divya, V. (2023). A Deep Learning Based Human Activity Recognition System for Monitoring the Elderly People. In: Shukla, P.K., Singh, K.P., Tripathi, A.K., Engelbrecht, A. (eds) Computer Vision and Robotics. Algorithms for Intelligent Systems. Springer, Singapore. https://doi.org/10.1007/978-981-19-7892-0_11
