
ConvLSTM for Human Activity Recognition

Conference paper

International Conference on Innovative Computing and Communications

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 1388)

Abstract

Research in Human Activity Recognition (HAR) using wearable sensors and pocket devices has intensified in order to better understand and anticipate human behavior and intentions. Researchers are seeking systems that consume the fewest resources while identifying the activity a user is performing. In this paper, we propose a state-of-the-art deep learning-based activity recognition architecture, a Convolutional Long Short-Term Memory (ConvLSTM) network. The ConvLSTM approach significantly improves classification accuracy for six activities from raw data without any substantial feature engineering, thereby reducing model complexity and requiring only a minor pre-processing procedure. Our proposed model achieves 94% accuracy on the public UCI HAR dataset. In performance comparisons with earlier models, we observed notable improvements over a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) network, Deep Neural Network (DNN) models, and linear and non-linear machine learning models that depend heavily on manually engineered features.
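The abstract describes classifying the six UCI HAR activities directly from raw sensor windows with a ConvLSTM and only minor pre-processing. Below is a minimal Keras sketch of such a classifier, assuming the common UCI HAR setup of 128-sample windows over the 9 raw inertial signal channels; the 4x32 sub-sequence split, filter counts, and training settings are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a ConvLSTM activity classifier for UCI HAR.
# Assumptions (not from the paper): 128-sample windows, 9 input channels,
# each window split into 4 "frames" of 32 samples, 64 filters, dropout 0.5.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, Dropout, Flatten, Dense

n_steps, n_length, n_channels, n_classes = 4, 32, 9, 6  # 4 x 32 = 128 samples per window

model = Sequential([
    # ConvLSTM2D expects 5-D input: (samples, time, rows, cols, channels);
    # here each window becomes 4 frames of shape (1, 32, 9).
    ConvLSTM2D(filters=64, kernel_size=(1, 3), activation='relu',
               input_shape=(n_steps, 1, n_length, n_channels)),
    Dropout(0.5),
    Flatten(),
    Dense(100, activation='relu'),
    Dense(n_classes, activation='softmax'),  # six activity classes
])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Usage sketch: X has shape (num_windows, 128, 9); y is one-hot with 6 columns.
# X = X.reshape((-1, n_steps, 1, n_length, n_channels))
# model.fit(X, y, epochs=25, batch_size=64)
```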




Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Singla, R., Mittal, S., Jain, A., Gupta, D. (2022). ConvLSTM for Human Activity Recognition. In: Khanna, A., Gupta, D., Bhattacharyya, S., Hassanien, A.E., Anand, S., Jaiswal, A. (eds) International Conference on Innovative Computing and Communications. Advances in Intelligent Systems and Computing, vol 1388. Springer, Singapore. https://doi.org/10.1007/978-981-16-2597-8_28

