
Few-shot transfer learning for wearable IMU-based human activity recognition

  • Original Article
Neural Computing and Applications

Abstract

Deep learning has proven highly effective for human activity recognition (HAR) when large amounts of labelled data are available for the target task. However, training a deep learning model to generalize well on a new task from only a few labelled observations remains an active area of research. In this paper, a novel few-shot transfer learning (FSTL) approach is proposed for classifying human activities from just a few instances (shots) of data obtained from a wearable system assembled to collect inertial sensor data for different human activities performed by two users. First, a deep learning model is trained on a large, publicly available HAR dataset. Its parameters are then fine-tuned using the Reptile algorithm to determine an optimal initial parameter set from which the model can classify activities with just a few shots of data from the target task. The proposed FSTL approach yields average classification accuracies of 74.86 ± 0.71% and 79.20 ± 1.05% for 3-way, 5-shot classification of new activities performed by a single user and of the same set of activities performed by a new user, respectively. When the pre-trained weights are used as the initial weights in the Reptile algorithm, the generalization ability of the model improves by about 10% for 3-way, 5-shot classification compared with few-shot learning without parameter transfer.
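To make the meta-training procedure concrete, the sketch below shows one outer-loop iteration of the Reptile meta-update (after Nichol et al.) on a single few-shot task. This is a minimal illustration under stated assumptions, not the authors' implementation: the PyTorch model, the inner-loop hyperparameters, and the support-set tensors are hypothetical placeholders.

```python
# Minimal sketch of one Reptile outer-loop step on a few-shot task.
# The model, hyperparameters, and support-set tensors are hypothetical
# placeholders, not values taken from the paper.
import copy
import torch
import torch.nn as nn

def reptile_step(model, support_x, support_y,
                 inner_lr=1e-3, inner_steps=5, meta_step_size=0.1):
    """Adapt a clone of the model on one task's support set, then move
    the meta-parameters a fraction of the way toward the adapted ones."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(inner_steps):       # inner loop: ordinary SGD on the task
        opt.zero_grad()
        loss = loss_fn(adapted(support_x), support_y)
        loss.backward()
        opt.step()
    with torch.no_grad():              # outer loop: theta += alpha * (theta_s - theta)
        for p, p_s in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_step_size * (p_s - p))
```

In the FSTL setting described in the abstract, the meta-parameters would be initialized from the weights of the model pre-trained on the public HAR dataset, rather than from a random initialization.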


Data availability

The data that support the findings of this study are available to collaborating researchers upon request from the corresponding author.

Abbreviations

\(U\) :

Publicly available dataset

\(({x}_{i},{y}_{i})\) :

\({x}_{i}\) represents sensor data and \({y}_{i}\) represents the activity label

\({U}_{{\text{train}}}\) :

Training dataset generated from \(U\)

\({U}_{{\text{val}}}\) :

Validation dataset generated from \(U\)

\({U}_{{\text{test}}}\) :

Testing dataset generated from \(U\)

P :

Number of deep learning models trained and tested on \(U\)

\({M}_{p}\) :

The \(p\)th deep learning model

\(acc\) :

Accuracy of a deep learning model

\({acc}_{p}\) :

Accuracy of the \(p\)th deep learning model

\(\widetilde{M}\) :

The best-performing model, i.e., the one that yields the highest accuracy

\(\widetilde{\theta }\) :

Parameters of the best-performing model \(\widetilde{M}\)

\({\theta }_{0}\) :

Initial parameters of the deep learning model used as the base model in the Reptile algorithm

\(D\) :

Self-recorded target dataset

\({D}_{{\text{source}}}\) :

Source data generated from the self-recorded target dataset

\({D}_{{\text{target}}}\) :

Target data generated from the self-recorded target dataset

\(epochs\) :

Number of epochs of the Reptile algorithm

\(iter\) :

Number of meta-training iterations in the Reptile algorithm

\({S}_{s}\) :

The support set from the source dataset, \({D}_{{\text{source}}}\)

\({Q}_{s}\) :

The query set from the source dataset, \({D}_{{\text{source}}}\)

\({\theta }_{s}\) :

Model parameters after optimizing on \({S}_{s}\)

\(L( )\) :

Loss function

\({\theta }^{*}\) :

Model parameters after optimizing in the outer loop of the Reptile algorithm (see the sketch after this list)

\(U\left({\text{data}};\;{\text{initial}}\_{\text{parameters}}\right)\) :

Model parameters after optimizing on the given \({\text{data}}\), starting from the initial set of parameters \({\text{initial}}\_{\text{parameters}}\)

\(\alpha\) :

Step size during the weight update

\({S}_{t}\) :

The support set from the target dataset

\({Q}_{t}\) :

The query set from the target dataset
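Using this notation, one outer-loop iteration of the Reptile algorithm can be written as follows; this is a reconstruction from the definitions above, since the update rule itself appears only in the full text. Starting from \({\theta }^{*}={\theta }_{0}\) (with \({\theta }_{0}=\widetilde{\theta }\) in the proposed FSTL approach), the inner loop computes \({\theta }_{s}=U\left({S}_{s};\;{\theta }^{*}\right)\) by optimizing on the support set, and the outer loop then updates

\({\theta }^{*}\leftarrow {\theta }^{*}+\alpha \left({\theta }_{s}-{\theta }^{*}\right)\)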


Author information


Corresponding author

Correspondence to Rinki Gupta.

Ethics declarations

Conflict of interest

The authors have no conflict of interest to declare.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ganesha, H.S., Gupta, R., Gupta, S.H. et al. Few-shot transfer learning for wearable IMU-based human activity recognition. Neural Comput & Applic 36, 10811–10823 (2024). https://doi.org/10.1007/s00521-024-09645-7

