
Validation of human activity recognition using a convolutional neural network on accelerometer and gyroscope data


  • Brief Communication
  • German Journal of Exercise and Sport Research

Abstract

Background

Human activity recognition (HAR) is the task of classifying sequences of data recorded by specialized wearable sensors into known, well-defined classes of physical activity. In principle, activity recognition provides great societal benefits, especially in real-life, human-centric applications such as healthcare and care of the elderly. Using raw acceleration and angular velocity to train a convolutional neural network has shown great success in terms of recognition accuracy. This article presents the quality of activity recognition obtained using a convolutional neural network on acceleration and angular velocity data recorded from different sensor locations.
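
The classification step described above typically operates on short, fixed-length windows cut from the continuous sensor stream. The sketch below illustrates such windowing; the window length, overlap, and random placeholder signal are illustrative assumptions, not details taken from the study.

```python
# Hedged sketch of a typical HAR pre-processing step: slicing a continuous raw
# sensor stream (a NumPy array of shape [samples, 6], i.e. 3D acceleration plus
# 3D angular velocity) into fixed-length, overlapping windows for a classifier.
# Window length and step are illustrative assumptions.
import numpy as np

def sliding_windows(signal, window_len=128, step=64):
    """Return an array of shape [n_windows, window_len, n_channels]."""
    windows = [signal[start:start + window_len]
               for start in range(0, len(signal) - window_len + 1, step)]
    return np.stack(windows)

raw = np.random.randn(1000, 6)        # placeholder for accelerometer + gyroscope samples
print(sliding_windows(raw).shape)     # e.g. (14, 128, 6)
```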

Methods

Thirty-five volunteers from two studies (16 women and 19 men) with an average age of 28.54 years wore Move4/EcgMove4 accelerometers at six different body positions (ankle, thigh, hip, wrist, upper arm, chest) while completing typical activities (sitting, standing, lying, walking, jogging, cycling). We then used these data sets to evaluate a two-dimensional convolutional neural network (2D-CNN) that takes 3D acceleration and 3D angular velocity signals as inputs to recognize human activity. We measured the network's performance using accuracy and Cohen’s κ.
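
For illustration, a minimal sketch of a 2D-CNN that accepts windows of six sensor channels (3D acceleration plus 3D angular velocity) follows; the window length, filter counts, and kernel sizes are assumptions, since the article does not report the exact architecture in this summary.

```python
# Minimal sketch (not the authors' exact architecture): a 2D-CNN that treats a
# window of samples x 6 sensor channels as a single-channel "image".
# Layer sizes and kernel shapes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW_LEN = 128      # assumed number of samples per window
N_CHANNELS = 6        # ax, ay, az, gx, gy, gz
N_CLASSES = 6         # sitting, standing, lying, walking, jogging, cycling

model = models.Sequential([
    layers.Input(shape=(WINDOW_LEN, N_CHANNELS, 1)),
    layers.Conv2D(32, kernel_size=(5, 3), activation="relu", padding="same"),
    layers.MaxPooling2D(pool_size=(2, 1)),
    layers.Conv2D(64, kernel_size=(5, 3), activation="relu", padding="same"),
    layers.MaxPooling2D(pool_size=(2, 1)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```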

Results

Depending on the location of the sensor, the accuracy of the network varies from 96.57% (ankle) to 99.28% (thigh) and Cohen’s κ varies from 0.96 (ankle) to 0.99 (thigh).
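
Both reported metrics can be computed directly from window-level predictions. The sketch below uses scikit-learn with placeholder label arrays, not data from the study.

```python
# Hedged illustration of the two reported metrics; the label arrays are
# placeholders chosen only to make the example runnable.
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [0, 1, 2, 3, 4, 5, 0, 1]   # ground-truth activity labels (example values)
y_pred = [0, 1, 2, 3, 4, 5, 0, 2]   # model predictions (example values)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Cohen's kappa:", cohen_kappa_score(y_true, y_pred))
```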

Conclusions

The 2D-CNN showed excellent performance in human activity recognition. Using raw signals may enable real-time, on-device (at the edge) activity recognition even on small devices with low computational power and limited storage.
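
One possible route to such edge deployment, offered as an assumption rather than the authors' reported tooling, is converting a trained Keras model to TensorFlow Lite for execution on resource-constrained hardware.

```python
# Assumed (not reported in the article) path to on-device inference: convert a
# trained tf.keras model to TensorFlow Lite. The tiny stand-in model and the
# output filename are hypothetical.
import tensorflow as tf

model = tf.keras.Sequential([            # stand-in for the trained HAR model
    tf.keras.layers.Input(shape=(128, 6, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(6, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization to shrink the model
tflite_model = converter.convert()

with open("har_cnn.tflite", "wb") as f:                # hypothetical output filename
    f.write(tflite_model)
```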

References

  • Avilés-Cruz, C., Ferreyra-Ramírez, A., Zúñiga-López, A., & Villegas-Cortéz, J. (2019). Coarse-fine convolutional deep-learning strategy for human activity recognition. Sensors, 19(7), 1556.

  • Buchner, D. M. (2014). The development and content of the 2008 physical activity guidelines for Americans. Journal of Physical Education, Recreation & Dance, 85(7), 13–16.

  • Cho, H., & Yoon, S. M. (2018). Divide and conquer-based 1D CNN human activity recognition using test data sharpening. Sensors, 18(4), 1055.

  • Gadri, S., & Neuhold, E. (2020). Building best predictive models using ML and DL approaches to categorize fashion clothes. In International Conference on Artificial Intelligence and Soft Computing (pp. 90–102). Cham: Springer.

  • Giurgiu, M., Bussmann, J. B., Hill, H., Anedda, B., Kronenwett, M., Koch, E. D., & Reichert, M. (2020). Validating accelerometers for the assessment of body position and sedentary behavior. Journal for the Measurement of Physical Behaviour, 3(3), 253–263.

  • Hamm, J., Stone, B., Belkin, M., & Dennis, S. (2012). Automatic annotation of daily activity from smartphone-based multisensory streams. In International Conference on Mobile Computing, Applications, and Services (pp. 328–342). Berlin, Heidelberg: Springer.

  • Ji, X., Cheng, J., Feng, W., & Tao, D. (2018). Skeleton embedded motion body partition for human action recognition using depth sequences. Signal Processing, 143, 56–68.

  • Jiang, W., & Yin, Z. (2015). Human activity recognition using wearable sensors by deep convolutional neural networks. In Proceedings of the 23rd ACM International Conference on Multimedia (pp. 1307–1310).

  • O’Shea, K., & Nash, R. (2015). An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458.

  • Qi, X., Keally, M., Zhou, G., Li, Y., & Ren, Z. (2013). AdaSense: adapting sampling rates for activity recognition in body sensor networks. In 2013 IEEE 19th Real-Time and Embedded Technology and Applications Symposium (RTAS) (pp. 163–172). IEEE.

  • Rana, J. B., Shetty, R., & Jha, T. (2015). Application of machine learning techniques in human activity recognition. arXiv preprint arXiv:1510.05577.

  • Ronao, C. A., & Cho, S. B. (2016). Human activity recognition with smartphone sensors using deep learning neural networks. Expert Systems with Applications, 59, 235–244.

  • Shakya, S. R., Zhang, C., & Zhou, Z. (2018). Comparative study of machine learning and deep learning architecture for human activity recognition using accelerometer data. International Journal of Machine Learning and Computing, 8(6), 577–582.

  • Staudenmayer, J., Zhu, W., & Catellier, D. J. (2012). Statistical considerations in the analysis of accelerometry-based activity monitor data. Medicine & Science in Sports & Exercise, 44(Suppl 1), 61–67. https://doi.org/10.1249/MSS.0b013e3182399e0f

  • Warrens, M. J. (2015). Five ways to look at Cohen’s kappa. Journal of Psychology & Psychotherapy, 5(4), 1.

  • Zhang, C., Yang, X., Lin, W., & Zhu, J. (2012). Recognizing human group behaviors with multi-group causalities. In 2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology (Vol. 3, pp. 44–48). IEEE.

  • Zhou, Z., Li, K., & He, X. (2015). Recognizing human activity in still images by integrating group-based contextual cues. In Proceedings of the 23rd ACM International Conference on Multimedia (pp. 1135–1138).

Acknowledgements

We thank Marco Giurgiu (KIT Karlsruhe) and his team for providing us with the data from their study (Giurgiu et al., 2020).

Author information

Corresponding author

Correspondence to Eni Hysenllari.

Ethics declarations

Conflict of interest

E. Hysenllari, J. Ottenbacher and D. McLennan disclosed that they are employed by movisens GmbH, which sells the accelerometer devices mentioned in the article as well as the accompanying analysis software.

For this article, no studies with human participants or animals were performed by any of the authors. All studies mentioned were conducted in accordance with the ethical standards indicated in each case.

About this article

Cite this article

Hysenllari, E., Ottenbacher, J. & McLennan, D. Validation of human activity recognition using a convolutional neural network on accelerometer and gyroscope data. Ger J Exerc Sport Res 52, 248–252 (2022). https://doi.org/10.1007/s12662-022-00817-y
