Human Activity Recognition (HAR) Using Deep Learning: Review, Methodologies, Progress and Future Research Directions

  • Review article
  • Published in Archives of Computational Methods in Engineering

Abstract

Human activity recognition (HAR) is essential in many domains, including healthcare and smart homes. This paper presents a comprehensive survey of the current state and future directions of deep-learning-based HAR. The review focuses on the key contributions of deep learning to the advancement of HAR across both sensor and video modalities. The wide range of databases and performance metrics used in the implementation of HAR methodologies is described in depth. The paper also explores HAR’s many potential applications, from healthcare, emotion recognition, and assisted living to security and education. It provides an in-depth analysis of the most significant works that employ deep learning techniques for a variety of HAR downstream tasks across both the video and sensor domains, including the most recent advances. Finally, it addresses the problems and limitations of current HAR research and proposes future research avenues for advancing the field.


Data Availability

Not applicable.


    Google Scholar 

  78. Bulbul MF, Ali H (2021) Gradient local auto-correlation features for depth human action recognition. SN Appl Sci 3(5):1–13

    Google Scholar 

  79. Khaled H, Abu-Elnasr O, Elmougy S, Tolba A (2021) Intelligent system for human activity recognition in iot environment. Complex Intell Syst pp 1–12

  80. Cheng Q, Liu Z, Ren Z, Cheng J, Liu J (2022) Spatial-temporal information aggregation and cross-modality interactive learning for RGB-D-based human action recognition. IEEE Access 10:104190–104201

  81. Dong J, Gao Y, Lee HJ, Zhou H, Yao Y, Fang Z, Huang B (2020) Action recognition based on the fusion of graph convolutional networks with high order features. Appl Sci 10(4):1482

    Google Scholar 

  82. Li D, Qiu Z, Pan Y, Yao T, Li H, Mei T (2021) Representing videos as discriminative sub-graphs for action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 3310–3319

  83. Wang L, Tong Z, Ji B, Wu G (2021) TDN: temporal difference networks for efficient action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 1895–1904

  84. Chen J, Ho CM (2022) Mm-vit: Multi-modal video transformer for compressed video action recognition. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp 1910–1921

  85. Zhang B, Yu J, Fifty C, Han W, Dai AM, Pang R, Sha F (2021) Co-training transformer with videos and images improves action recognition. arXiv:2112.07175

  86. Duhme M, Memmesheimer R, Paulus D (2021) Fusion-gcn: Multimodal action recognition using graph convolutional networks. In: DAGM German conference on pattern recognition, pp 265–281. Springer, Berlin

  87. Islam MM, Iqbal T (2021) Multi-gat: A graphical attention-based hierarchical multimodal representation learning approach for human activity recognition. IEEE Robot Autom Lett 6(2):1729–1736

    Google Scholar 

  88. Damen D, Doughty H, Farinella GM, Furnari A, Kazakos E, Ma J, Moltisanti D, Munro J, Perrett T, Price W et al (2022) Rescaling egocentric vision: collection, pipeline and challenges for epic-kitchens-100. Int J Comput Vis 130(1):33–55

    Google Scholar 

  89. Huang Z, Qing Z, Wang X, Feng Y, Zhang S, Jiang J, Xia Z, Tang M, Sang N, Ang Jr MH (2021) Towards training stronger video vision transformers for epic-kitchens-100 action recognition. arXiv:2106.05058

  90. Schuldt C, Laptev I, Caputo B (2004) Recognizing human actions: a local SVM approach. In: Proceedings of the 17th international conference on pattern recognition, 2004 (ICPR 2004), vol 3, pp 32–36. IEEE

  91. Kuehne H, Jhuang H, Garrote E, Poggio T, Serre T (2011). HMDB: a large video database for human motion recognition. In: 2011 international conference on computer vision, pp 2556–2563. IEEE

  92. Triboan D, Chen L, Chen F, Wang Z (2019) A semantics-based approach to sensor data segmentation in real-time activity recognition. Future Gener Comput Syst 93:224–236

    Google Scholar 

  93. Noor MHM, Salcic Z, Kevin I, Wang K (2017) Adaptive sliding window segmentation for physical activity recognition using a single tri-axial accelerometer. Pervasive Mob Comput 38:41–59

    Google Scholar 

  94. Liu H, Hartmann Y, Schultz T (2022) A practical wearable sensor-based human activity recognition research pipeline. In: HEALTHINF, pp 847–856

  95. Sarapata G, Morinan G, Dushin Y, Kainz B, Ong J, O’Keeffe J (2022) Video-based activity recognition for automated motor assessment of Parkinson's disease. IEEE J Biomed Health Inform

  96. García S, Luengo J, Herrera F (2015) Data preprocessing in data mining. Springer, Berlin

    Google Scholar 

  97. Heaton J (2018) Ian Goodfellow, Yoshua Bengio, and Aaron Courville: deep learning. Genetic Program Evolvable Mach 19(1–2):305–307

  98. Castro H, Correia V, Sowade E, Mitra K, Rocha J, Baumann R, Lanceros-Méndez S (2016) All-inkjet-printed low-pass filters with adjustable cutoff frequency consisting of resistors, inductors and transistors for sensor applications. Org Electron 38:205–212

    Google Scholar 

  99. Ignatov AD, Strijov VV (2016) Human activity recognition using quasiperiodic time series collected from a single tri-axial accelerometer. Multimed Tools Appl 75:7257–7270

    Google Scholar 

  100. Wang Z, Wu D, Chen J, Ghoneim A, Hossain MA (2016) A triaxial accelerometer-based human activity recognition via eemd-based features and game-theory-based feature selection. IEEE Sens J 16(9):3198–3207

    Google Scholar 

  101. Müller AC, Guido S (2016) Introduction to machine learning with Python: a guide for data scientists. O’Reilly Media, Inc

  102. Guo J, Mu Y, Xiong M, Liu Y, Gu J (2019) Activity feature solving based on tf-idf for activity recognition in smart homes. Complexity 2019:1–10

    Google Scholar 

  103. Xu D, Yan Y, Ricci E, Sebe N (2017) Detecting anomalous events in videos by learning deep representations of appearance and motion. Comput Vision Image Understand 156:117–127

    Google Scholar 

  104. Bhavan A, Aggarwal S (2018) Stacked generalization with wrapper-based feature selection for human activity recognition. In: 2018 IEEE symposium series on computational intelligence (SSCI), pp 1064–1068. IEEE

  105. Li Z, Fan Y, Liu W (2015) The effect of whitening transformation on pooling operations in convolutional autoencoders. EURASIP J Adv Signal Process 2015(1):1–11

    Google Scholar 

  106. Nam W, Dollár P, Han JH (2014) Local decorrelation for improved pedestrian detection. Adv Neural Inf Process Syst 27

  107. Kessy A, Lewin A, Strimmer K (2018) Optimal whitening and decorrelation. Am Stat 72(4):309–314

    MathSciNet  Google Scholar 

  108. Bracewell RN, Bracewell RN (1986) The Fourier transform and its applications, vol 31999. McGraw-Hill, New York

    Google Scholar 

  109. Sejdic E, Djurovic I, Stankovic L (2008) Quantitative performance analysis of scalogram as instantaneous frequency estimator. IEEE Trans Signal Process 56(8):3837–3845

    MathSciNet  Google Scholar 

  110. Gu F, Chung M-H, Chignell M, Valaee S, Zhou B, Liu X (2021) A survey on deep learning for human activity recognition. ACM Comput Surv 54(8):1–34

    Google Scholar 

  111. Zhang S, Li Y, Zhang S, Shahabi F, Xia S, Deng Y, Alshurafa N (2022) Deep learning in human activity recognition with wearable sensors: a review on advances. Sensors 22(4):1476

    Google Scholar 

  112. Zhang X-L, Wu J (2012) Deep belief networks based voice activity detection. IEEE Trans Audio Speech Lang Process 21(4):697–710

    Google Scholar 

  113. Fang H, Hu C (2014) Recognizing human activity in smart home using deep learning algorithm. In: Proceedings of the 33rd Chinese control conference, pp 4716–4720. IEEE

  114. Uddin MZ, Hassan MM, Almogren A, Alamri A, Alrubaian M, Fortino G (2017) Facial expression recognition utilizing local direction-based robust features and deep belief network. IEEE Access 5:4525–4536

    Google Scholar 

  115. Zheng W-L, Zhu J-Y, Peng Y, Lu B-L (2014) Eeg-based emotion classification using deep belief networks. In: 2014 IEEE international conference on multimedia and expo (ICME), pp 1–6. IEEE

  116. Hinton G, Deng L, Yu D, Dahl GE, Mohamed A-R, Jaitly N, Senior A, Vanhoucke V, Nguyen P, Sainath TN et al (2012) Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Process Mag 29(6):82–97

    Google Scholar 

  117. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge

    Google Scholar 

  118. Gao X, Luo H, Wang Q, Zhao F, Ye L, Zhang Y (2019) A human activity recognition algorithm based on stacking denoising autoencoder and lightgbm. Sensors 19(4):947

    Google Scholar 

  119. Tang Y, Salakhutdinov R, Hinton G (2012) Robust boltzmann machines for recognition and denoising. In: 2012 IEEE conference on computer vision and pattern recognition, pp 2264–2271. IEEE

  120. Hinton GE, Osindero S, Teh Y-W (2006) A fast learning algorithm for deep belief nets. Neural Comput 18(7):1527–1554

    MathSciNet  Google Scholar 

  121. Bhattacharya S, Lane ND (2016) From smart to deep: Robust activity recognition on smartwatches using deep learning. In: 2016 IEEE international conference on pervasive computing and communication workshops (PerCom Workshops), pp 1–6. IEEE

  122. Plötz T, Hammerla NY, Olivier PL (2011) Feature learning for activity recognition in ubiquitous computing. In: Twenty-second international joint conference on artificial intelligence

  123. Lane ND, Georgiev P, Qendro L (2015) Deepear: robust smartphone audio sensing in unconstrained acoustic environments using deep learning. In: Proceedings of the 2015 ACM international joint conference on pervasive and ubiquitous computing, pp 283–294

  124. Radu V, Lane ND, Bhattacharya S, Mascolo C, Marina MK, Kawsar F (2016) Towards multimodal deep learning for activity recognition on mobile devices. In: Proceedings of the 2016 ACM international joint conference on pervasive and ubiquitous computing: adjunct, pp 185–188

  125. Li Y, Shi D, Ding B, Liu D (2014) Unsupervised feature learning for human activity recognition using smartphone sensors. In: Mining intelligence and knowledge exploration, pp 99–107. Springer, Berlin

  126. Mohammed S, Tashev I (2017) Unsupervised deep representation learning to remove motion artifacts in free-mode body sensor networks. In: 2017 IEEE 14th international conference on wearable and implantable body sensor networks (BSN), pp 183–188. IEEE

  127. Valarezo AE, Rivera LP, Park H, Park N, Kim T-S (2020) Human activities recognition with a single writs imu via a variational autoencoder and android deep recurrent neural nets. Comput Sci Inf Syst 17(2):581–597

    Google Scholar 

  128. Vavoulas G, Chatzaki C, Malliotakis T, Pediaditis M, Tsiknakis M (2016) The mobiact dataset: recognition of activities of daily living using smartphones. In: International conference on information and communication technologies for ageing well and e-health, vol 2, pp 143–151. SciTePress

  129. Gu J, Wang Z, Kuen J, Ma L, Shahroudy A, Shuai B, Liu T, Wang X, Wang G, Cai J et al (2018) Recent advances in convolutional neural networks. Pattern Recognit 77:354–377

    Google Scholar 

  130. Ronao CA, Cho S-B (2016) Human activity recognition with smartphone sensors using deep learning neural networks. Expert Syst Appl 59:235–244

    Google Scholar 

  131. Hughes D, Correll N (2018) Distributed convolutional neural networks for human activity recognition in wearable robotics. In: Distributed autonomous robotic systems, pp 619–631. Springer, Berlin

  132. Dong M, Han J, He Y, Jing X (2018) Har-net: Fusing deep representation and hand-crafted features for human activity recognition. In: International conference on signal and information processing, networking and computers, pp 32–40. Springer, Berlin

  133. Ravi D, Wong C, Lo B, Yang G-Z (2016) Deep learning for human activity recognition: a resource efficient implementation on low-power devices. In: 2016 IEEE 13th international conference on wearable and implantable body sensor networks (BSN), pp 71–76. IEEE

  134. Zeng M, Nguyen LT, Yu B, Mengshoel OJ, Zhu J, Wu P, Zhang J (2014) Convolutional neural networks for human activity recognition using mobile sensors. In: 6th international conference on mobile computing, applications and services, pp 197–205. IEEE

  135. Lee S-M, Yoon, SM, Cho H (2017) Human activity recognition from accelerometer data using convolutional neural network. In: 2017 IEEE international conference on big data and smart computing (bigcomp), pp 131–134. IEEE

  136. Huang W, Zhang L, Gao W, Min F, He J (2021) Shallow convolutional neural networks for human activity recognition using wearable sensors. IEEE Trans Instrum Measur 70:1–11

    Google Scholar 

  137. Yang J, Nguyen MN, San PP, Li XL, Krishnaswamy S (2015) Deep convolutional neural networks on multichannel time series for human activity recognition. In: Twenty-fourth international joint conference on artificial intelligence

  138. Ha S, Choi S (2016) Convolutional neural networks for human activity recognition using multiple accelerometer and gyroscope sensors. In: 2016 international joint conference on neural networks (IJCNN), pp 381–388. IEEE

  139. Lv M, Xu W, Chen T (2019) A hybrid deep convolutional and recurrent neural network for complex activity recognition using multimodal sensors. Neurocomputing 362:33–40

    Google Scholar 

  140. Li X, Luo J, Younes R (2020) Activitygan: generative adversarial networks for data augmentation in sensor-based human activity recognition. In: Adjunct proceedings of the 2020 ACM international joint conference on pervasive and ubiquitous computing and proceedings of the 2020 ACM Iinternational symposium on wearable computers, pp 249–254

  141. Bailador G, Roggen D, Tröster G, Triviño G (2007) Real time gesture recognition using continuous time recurrent neural networks. In: 2nd international ICST conference on body area networks

  142. Zheng L, Li S, Zhu C, Gao Y (2019) Application of indrnn for human activity recognition: the sussex-huawei locomotion-transportation challenge. In: Adjunct proceedings of the 2019 ACM international joint conference on pervasive and ubiquitous computing and proceedings of the 2019 ACM international symposium on wearable computers, pp 869–872

  143. Wang X, Liao W, Guo Y, Yu L, Wang Q, Pan M, Li P (2019c) PERRNN: personalized recurrent neural networks for acceleration-based human activity recognition. In: ICC 2019-2019 IEEE international conference on communications (ICC), pp 1–6. IEEE

  144. Ketykó I, Kovács F, Varga KZ (2019). Domain adaptation for semg-based gesture recognition with recurrent neural networks. In: 2019 international joint conference on neural networks (IJCNN), pp 1–7. IEEE

  145. Zebin T, Sperrin M, Peek N, Casson AJ (2018) Human activity recognition from inertial sensor time-series using batch normalized deep LSTM recurrent networks. In: 2018 40th annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp 1–4. IEEE

  146. Hu Y, Zhang X-Q, Xu L, He FX, Tian Z, She W, Liu W (2020) Harmonic loss function for sensor-based human activity recognition based on lstm recurrent neural networks. IEEE Access 8:135617–135627

    Google Scholar 

  147. Ordóñez FJ, Roggen D (2016) Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors 16(1):115

    Google Scholar 

  148. Sutskever I, Martens J, Hinton GE (2011) Generating text with recurrent neural networks. In: ICML

  149. Murad A, Pyun J-Y (2017) Deep recurrent neural networks for human activity recognition. Sensors 17(11):2556

    Google Scholar 

  150. Inoue M, Inoue S, Nishida T (2018) Deep recurrent neural network for mobile human activity recognition with high throughput. Artif Life Robot 23(2):173–185

    Google Scholar 

  151. Guan Y, Plötz T (2017) Ensembles of deep lstm learners for activity recognition using wearables. Proc ACM Interact Mob Wear Ubiquitous Technol 1(2):1–28

    Google Scholar 

  152. Gupta R, Dhindsa IS, Agarwal R (2020) Continuous angular position estimation of human ankle during unconstrained locomotion. Biomed Signal Process Control 60:101968

    Google Scholar 

  153. Okai J, Paraschiakos S, Beekman M, Knobbe A, de Sá CR (2019) Building robust models for human activity recognition from raw accelerometers data using gated recurrent units and long short term memory neural networks. In: 2019 41st annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp 2486–2491. IEEE

  154. Czuszyński K, Rumiński J, Kwaśniewska A (2018) Gesture recognition with the linear optical sensor and recurrent neural networks. IEEE Sens J 18(13):5429–5438

    Google Scholar 

  155. Shi J, Zuo D, Zhang Z (2021) A gan-based data augmentation method for human activity recognition via the caching ability. Internet Technol Lett 4(5):e257

    Google Scholar 

  156. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2020) Generative adversarial networks. Commun ACM 63(11):139–144

    MathSciNet  Google Scholar 

  157. Wang J, Chen Y, Gu Y, Xiao Y, Pan H (2018) Sensorygans: an effective generative adversarial framework for sensor-based human activity recognition. In: 2018 international joint conference on neural networks (IJCNN), pp 1–8. IEEE

  158. Chan MH, Noor MHM (2021) A unified generative model using generative adversarial network for activity recognition. J Ambient Intell Hum Comput 12(7):8119–8128

    Google Scholar 

  159. Soleimani E, Nazerfard E (2021) Cross-subject transfer learning in human activity recognition systems using generative adversarial networks. Neurocomputing 426:26–34

    Google Scholar 

  160. Hao W, Zhang Z (2019) Spatiotemporal distilled dense-connectivity network for video action recognition. Pattern Recognit 92:13–24

    Google Scholar 

  161. Moeslund TB, Hilton A, Krüger V (2006) A survey of advances in vision-based human motion capture and analysis. Comput Vis Image Understand 104(2–3):90–126

    Google Scholar 

  162. Poppe R (2010) A survey on vision-based human action recognition. Image Vision Comput 28(6):976–990

    Google Scholar 

  163. Turaga P, Chellappa R, Subrahmanian VS, Udrea O (2008) Machine recognition of human activities: a survey. IEEE Trans Circuit Syst Video Technol 18(11):1473–1488

    Google Scholar 

  164. Carlsson S, Sullivan J (2001) Action recognition by shape matching to key frames. In: Workshop on models versus exemplars in computer vision, volume 1. Citeseer

  165. Sharma V, Gupta M, Pandey AK, Mishra D, Kumar A (2022) A review of deep learning-based human activity recognition on benchmark video datasets. Appl Artif Intell 36(1):2093705

    Google Scholar 

  166. Carreira J, Zisserman A (2017) Quo vadis, action recognition? a new model and the kinetics dataset. In: proceedings of the IEEE conference on computer vision and pattern recognition, pp 6299–6308

  167. Naieera P, Anu P, Sadiq M (2018) An intelligent action predictor from video using deep learning. In: 2018 international conference on emerging trends and innovations in engineering and technological research (ICETIETR), pp 1–4. IEEE

  168. Zhang S, Wei Z, Nie J, Huang L, Wang S, Li Z (2017) A review on human activity recognition using vision-based method. J Healthc Eng

  169. Zhang B, Wang L, Wang Z, Qiao Y, Wang H (2016). Real-time action recognition with enhanced motion vector CNNs. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2718–2726

  170. Piergiovanni A, Ryoo MS (2019) Representation flow for action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 9945–9953

  171. Verma KK, Singh BM, Dixit A (2019) A review of supervised and unsupervised machine learning techniques for suspicious behavior recognition in intelligent surveillance system. Int J Inf Technol pp 1–14

  172. Verma KK, Singh BM, Mandoria HL, Chauhan P (2020) Two-stage human activity recognition using 2D-convnet

  173. Varol G, Laptev I, Schmid C (2017) Long-term temporal convolutions for action recognition. IEEE Trans Pattern Anal Machine Intell 40(6):1510–1517

    Google Scholar 

  174. Huang C-D, Wang C-Y, Wang J-C (2015) Human action recognition system for elderly and children care using three stream convnet. In: 2015 international conference on orange technologies (ICOT), pp 5–9. IEEE

  175. Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556

  176. Zheng H, Zhang X-M (2020) A cross-modal learning approach for recognizing human actions. IEEE Syst J 15(2):2322–2330

    Google Scholar 

  177. Crasto N, Weinzaepfel P, Alahari K, Schmid C (2019) Mars: Motion-augmented rgb stream for action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 7882–7891

  178. Yang H, Yuan C, Li B, Du Y, Xing J, Hu W, Maybank SJ (2019) Asymmetric 3D convolutional neural networks for action recognition. Pattern Recognit 85:1–12

    Google Scholar 

  179. Ji S, Xu W, Yang M, Yu K (2012) 3d convolutional neural networks for human action recognition. IEEE Trans Pattern Anal Mach Intell 35(1):221–231

    Google Scholar 

  180. Abdelbaky A, Aly S (2020) Human action recognition based on simple deep convolution network pcanet. In: 2020 international conference on innovative trends in communication and computer engineering (ITCE), pp. 257–262. IEEE

  181. Thameri M, Kammoun A, Abed-Meraim K, Belouchrani A (2011) Fast principal component analysis and data whitening algorithms. In: International workshop on systems, signal processing and their applications, WOSSPA, pp 139–142. IEEE

  182. Zhang H, Liu D, Xiong Z (2019) Two-stream action recognition-oriented video super-resolution. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 8799–8808

  183. Meng Y, Lin C-C, Panda R, Sattigeri P, Karlinsky L, Oliva A, Saenko K, Feris R (2020). AR-NET: adaptive frame resolution for efficient action recognition. In: European conference on computer vision, pp 86–104. Springer, Berlin

  184. Perrett T, Damen D (2019) DDLSTM: dual-domain lstm for cross-dataset action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 7852–7861

  185. Sun L, Jia K, Chen K, Yeung D-Y, Shi BE, Savarese S (2017). Lattice long short-term memory for human action recognition. In: Proceedings of the IEEE international conference on computer vision, pp 2147–2156

  186. Grushin A, Monner DD, Reggia JA, Mishra A (2013) Robust human action recognition via long short-term memory. In: The 2013 international joint conference on neural networks (IJCNN), pp 1–8. IEEE

  187. Laptev I, Marszalek M, Schmid C, Rozenfeld B (2008) Learning realistic human actions from movies. In: 2008 IEEE conference on computer vision and pattern recognition, pp 1–8. IEEE

  188. Singh B, Marks TK, Jones M, Tuzel O, Shao M (2016) A multi-stream bi-directional recurrent neural network for fine-grained action detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1961–1970

  189. Wu J, Wang G, Yang W, Ji X (2016) Action recognition with joint attention on multi-level deep features. arXiv:1607.02556

  190. Baccouche M, Mamalet F, Wolf C, Garcia C, Baskurt A (2011) Sequential deep learning for human action recognition. In: International workshop on human behavior understanding, pp 29–39. Springer, Berlin

  191. Yue-Hei Ng J, Hausknecht M, Vijayanarasimhan S, Vinyals O, Monga R, Toderici G (2015) Beyond short snippets: deep networks for video classification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4694–4702

  192. Li Q, Qiu Z, Yao T, Mei T, Rui Y, Luo J (2016) Action recognition by learning deep multi-granular spatio-temporal video representation. In: Proceedings of the 2016 ACM on international conference on multimedia retrieval, pp 159–166

  193. Wang Y, Wang S, Tang J, O’Hare N, Chang Y, Li B (2016) Hierarchical attention network for action recognition in videos. arXiv:1607.06416

  194. Krizhevsky A, Sutskever I, Hinton GE (2017) Imagenet classification with deep convolutional neural networks. Commun ACM 60(6):84–90

    Google Scholar 

  195. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1–9

  196. Ge H, Yan Z, Yu W, Sun L (2019) An attention mechanism based convolutional lstm network for video action recognition. Multimed Tools Appl 78(14):20533–20556

    Google Scholar 

  197. Sudhakaran S, Escalera S, Lanz O (2019) LSTA: long short-term attention for egocentric action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 9954–9963

  198. Meng L, Zhao B, Chang B, Huang G, Sun W, Tung F, Sigal L (2019) Interpretable spatio-temporal attention for video action recognition. In: Proceedings of the IEEE/CVF international conference on computer vision workshops, pp 0–0

  199. Wu Z, Xiong C, Ma C-Y, Socher R, Davis LS (2019) Adaframe: adaptive frame selection for fast video recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 1278–1287

  200. Li Z, Gavrilyuk K, Gavves E, Jain M, Snoek CG (2018) Videolstm convolves, attends and flows for action recognition. Comput Vision Image Understand 166:41–50

    Google Scholar 

  201. Liu Z, Li Z, Wang R, Zong M, Ji W (2020) Spatiotemporal saliency-based multi-stream networks with attention-aware lstm for action recognition. Neural Comput Appl 32(18):14593–14602

    Google Scholar 

  202. Du Y, Wang W, Wang L (2015) Hierarchical recurrent neural network for skeleton based action recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1110–1118

  203. Li Y, Lan C, Xing J, Zeng W, Yuan C, Liu J (2016) Online human action detection using joint classification-regression recurrent neural networks. In: European conference on computer vision, pp 203–220. Springer, Berlin

  204. Liu J, Shahroudy A, Xu D, Wang G (2016) Spatio-temporal lstm with trust gates for 3d human action recognition. In: European conference on computer vision, pp 816–833. Springer, Berlin

  205. Mahasseni B, Todorovic S (2016) Regularizing long short term memory with 3d human-skeleton sequences for action recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 3054–3062

  206. Zhao L, Song Y, Zhang C, Liu Y, Wang P, Lin T, Deng M, Li H (2019) T-GCN: a temporal graph convolutional network for traffic prediction. IEEE Trans Intell Transp Syst 21(9):3848–3858

    Google Scholar 

  207. Si C, Jing Y, Wang W, Wang L, Tan T (2018) Skeleton-based action recognition with spatial reasoning and temporal stack learning. In: Proceedings of the European conference on computer vision (ECCV), pp 103–118

  208. Yan S, Xiong Y, Lin D (2018). Spatial temporal graph convolutional networks for skeleton-based action recognition. In: Thirty-second AAAI conference on artificial intelligence

  209. Cheng K, Zhang Y, Cao C, Shi L, Cheng J, Lu H (2020) Decoupling gcn with dropgraph module for skeleton-based action recognition. In: European conference on computer vision, pp 536–553. Springer, Berlin

  210. Korban M, Li X (2020) DDGCN: a dynamic directed graph convolutional network for action recognition. In: European conference on computer vision, pp 761–776. Springer, Berlin

  211. Yu P, Zhao Y, Li C, Yuan J, Chen C (2020) Structure-aware human-action generation. In: European conference on computer vision, pp 18–34. Springer, Berlin

  212. Zhang X, Xu C, Tian X, Tao D (2019) Graph edge convolutional neural networks for skeleton-based action recognition. IEEE Trans Neural Netw Learn Syst 31(8):3047–3060

    Google Scholar 

  213. Li M, Chen S, Chen X, Zhang Y, Wang Y, Tian Q (2019) Actional-structural graph convolutional networks for skeleton-based action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 3595–3603

  214. Peng W, Hong X, Chen H, Zhao G (2020) Learning graph convolutional network for skeleton-based human action recognition by neural searching. In: Proceedings of the AAAI conference on artificial intelligence, vol 34, pp 2669–2676

    Google Scholar 

  215. Zhang X, Xu C, Tao D (2020d) Context aware graph convolution for skeleton-based action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 14333–14342

  216. Wu C, Wu X-J, Kittler J (2019) Spatial residual layer and dense connection block enhanced spatial temporal graph convolutional network for skeleton-based action recognition. In: Proceedings of the IEEE/CVF international conference on computer vision workshops, pp 0–0

  217. Li B, Li X, Zhang Z, Wu F (2019) Spatio-temporal graph routing for skeleton-based action recognition. In: Proceedings of the AAAI conference on artificial intelligence, vol 33, pp 8561–8568

    Google Scholar 

  218. Liu Z, Zhang H, Chen Z, Wang Z, Ouyang W (2020) Disentangling and unifying graph convolutions for skeleton-based action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 143–152

  219. Si C, Chen W, Wang W, Wang L, Tan T (2019) An attention enhanced graph convolutional lstm network for skeleton-based action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 1227–1236

  220. Zhang P, Lan C, Zeng W, Xing J, Xue J, Zheng N (2020) Semantics-guided neural networks for efficient skeleton-based human action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 1112–1121

  221. Wen Y-H, Gao L, Fu H, Zhang F-L, Xia S (2019) Graph cnns with motif and variable temporal block for skeleton-based action recognition. In: Proceedings of the AAAI conference on artificial intelligence 33:8989–8996

    Google Scholar 

  222. Cheng K, Zhang Y, He X, Chen W, Cheng J, Lu H (2020) Skeleton-based action recognition with shift graph convolutional network. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 183–192

  223. Song Y.-F, Zhang Z, Shan C, Wang L (2022) Constructing stronger and faster baselines for skeleton-based action recognition. IEEE Trans Pattern Anal Machine Intell

  224. Li M, Chen S, Chen X, Zhang Y, Wang Y, Tian Q (2021) Symbiotic graph neural networks for 3d skeleton-based human action recognition and motion prediction. IEEE Trans Pattern Anal Mach Intell 44(6):3316–3333

    Google Scholar 

  225. Vincent P, Larochelle H, Bengio Y, Manzagol P-A (2008) Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th international conference on machine learning, pp 1096–1103

  226. Holyoak KJ (1987) Parallel distributed processing: explorations in the microstructure of cognition. Science 236:992–997

    Google Scholar 

  227. Cilimkovic M (2015) Neural networks and back propagation algorithm. Institute of Technology Blanchardstown, Blanchardstown Road North Dublin 15(1)

  228. Ranzato M, Huang FJ, Boureau Y-L, LeCun Y (2007) Unsupervised learning of invariant feature hierarchies with applications to object recognition. In: 2007 IEEE conference on computer vision and pattern recognition, pp 1–8. IEEE

  229. Baccouche M, Mamalet F, Wolf C, Garcia C, Baskurt A (2012) Spatio-temporal convolutional sparse auto-encoder for sequence classification. In: BMVC, pp 1–12. Citeseer

  230. Valstar MF, Jiang B, Mehu M, Pantic M, Scherer K (2011) The first facial expression recognition and analysis challenge. In: 2011 IEEE international conference on automatic face & gesture recognition (FG), pp 921–926. IEEE

  231. Budiman A, Fanany MI, Basaruddin C (2014) Stacked denoising autoencoder for feature representation learning in pose-based action recognition. In: 2014 IEEE 3rd global conference on consumer electronics (GCCE), pp 684–688. IEEE

  232. Hasan M, Roy-Chowdhury AK (2014) Continuous learning of human activity models using deep nets. In: European conference on computer vision, pp 705–720. Springer, Berlin

  233. Wu DX, Pan W, Xie LD, Huang CX (2014) An adaptive stacked denoising auto-encoder architecture for human action recognition. In: Applied mechanics and materials, vol 631, pp 403–409. Trans Tech Publ

  234. Zhang Z (2012) Microsoft kinect sensor and its effect. IEEE Multimed 19(2):4–10

    Google Scholar 

  235. Laptev I (2005) On space-time interest points. Int J Computer Vis 64(2):107–123

    Google Scholar 

  236. Mehta V, Dhall A, Pal S, Khan SS (2021) Motion and region aware adversarial learning for fall detection with thermal imaging. In: 2020 25th international conference on pattern recognition (ICPR), pp 6321–6328. IEEE

  237. Wang J, Zhang X, Gao Q, Yue H, Wang H (2016) Device-free wireless localization and activity recognition: a deep learning approach. IEEE Trans Veh Technol 66(7):6258–6267

    Google Scholar 

  238. Gao Q, Wang J, Ma X, Feng X, Wang H (2017) Csi-based device-free wireless localization and activity recognition using radio image features. IEEE Trans Veh Technol 66(11):10346–10356

    Google Scholar 

  239. Chen M, Xu Z, Weinberger KQ, Sha F (2012) Marginalized stacked denoising autoencoders. In: Proceedings of the learning workshop, Utah, UT, USA, vol 36

  240. Gu F, Flórez-Revuelta F, Monekosso D, Remagnino P (2015) Marginalised stacked denoising autoencoders for robust representation of real-time multi-view action recognition. Sensors 15(7):17209–17231

    Google Scholar 

  241. Tang Y, Zhang L, Min F, He J (2022) Multiscale deep feature learning for human activity recognition using wearable sensors. IEEE Trans Ind Electron 70(2):2106–2116

    Google Scholar 

  242. Sunasra M (2017) Performance metrics for classification problems in machine learning. Medium recuperado de. https://medium.com/thalusai/performance-metrics-for-classification-problems-in-machine-learningpart-i-b085d432082b

  243. Bhattacharya D, Sharma D, Kim W, Ijaz MF, Singh PK (2022) Ensem-har: An ensemble deep learning model for smartphone sensor-based human activity recognition for measurement of elderly health monitoring. Biosensors 12(6):393

    Google Scholar 

  244. Fawcett T (2006) An introduction to roc analysis. Pattern Recognit Lett 27(8):861–874

    MathSciNet  Google Scholar 

  245. Challa SK, Kumar A, Semwal VB (2022) A multibranch cnn-bilstm model for human activity recognition using wearable sensor data. Vis Comput 38(12):4095–4109

    Google Scholar 

  246. Deng Z, Zhai M, Chen L, Liu Y, Muralidharan S, Roshtkhari MJ, Mori G (2015) Deep structured models for group activity recognition. arXiv:1506.04191

  247. Braun A, Musse SR, de Oliveira LPL, Bodmann BE (2003) Modeling individual behaviors in crowd simulation. In: Proceedings 11th IEEE international workshop on program comprehension, pp 143–148. IEEE

  248. Wirz M, Franke T, Roggen D, Mitleton-Kelly E, Lukowicz P, Tröster G (2012) Inferring crowd conditions from pedestrians’ location traces for real-time crowd monitoring during city-scale mass gatherings. In: 2012 IEEE 21st international workshop on enabling technologies: infrastructure for collaborative enterprises, pp 367–372. IEEE

  249. Sunil A, Sheth MH, Shreyas E, et al (2021) Usual and unusual human activity recognition in video using deep learning and artificial intelligence for security applications. In: 2021 fourth international conference on electrical, computer and communication technologies (ICECCT), pp 1–6. IEEE

  250. Mohan A, Choksi M, Zaveri MA (2019) Anomaly and activity recognition using machine learning approach for video based surveillance. In: 2019 10th international conference on computing, communication and networking technologies (ICCCNT), pp 1–6. IEEE

  251. Igwe OM, Wang Y, Giakos GC, Fu J (2020) Human activity recognition in smart environments employing margin setting algorithm. J Ambient Intell Hum Comput pp 1–13

  252. Jiang Y, Wang J, Liang Y, Xia J (2019) Combining static and dynamic features for real-time moving pedestrian detection. Multimed Tools Appl 78(3):3781–3795

    Google Scholar 

  253. Parthasarathy P, Vivekanandan S, et al (2019) Detection of suspicious human activity based on cnn-dbnn algorithm for video surveillance applications. In: 2019 innovations in power and advanced computing technologies (i-PACT), vol 1, pp 1–7. IEEE

  254. Hammerla NY, Fisher J, Andras P, Rochester L, Walker R, Plötz T (2015) Pd disease state assessment in naturalistic environments using deep learning. In: Twenty-Ninth AAAI conference on artificial intelligence

  255. Um TT, Pfister FM, Pichler D, Endo S, Lang M, Hirche S, Fietzek U, Kulić D (2017) Data augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural networks. In: Proceedings of the 19th ACM international conference on multimodal interaction, pp 216–220

  256. Nemati E, Zhang S, Ahmed T, Rahman MM, Kuang J, Gao A (2021) Coughbuddy: multi-modal cough event detection using earbuds platform. In: 2021 IEEE 17th international conference on wearable and implantable body sensor networks (BSN), pp 1–4. IEEE

  257. Xu X, Nemati E, Vatanparvar K, Nathan V, Ahmed T, Rahman MM, McCaffrey D, Kuang J, Gao JA (2021) Listen2cough: leveraging end-to-end deep learning cough detection model to enhance lung health assessment using passively sensed audio. Proc ACM Interact Mob Wear Ubiquitous Technol 5(1):1–22

    Google Scholar 

  258. Zhang S, Nemati E, Ahmed T, Rahman MM, Kuang J, Gao A (2021) A novel multi-centroid template matching algorithm and its application to cough detection. In: 2021 43rd annual international conference of the IEEE engineering in medicine & biology society (EMBC), pp 7598–7604. IEEE

  259. Arifoglu D, Bouchachia A (2017) Activity recognition and abnormal behaviour detection with recurrent neural networks. Procedia Comput Sci 110:86–93

    Google Scholar 

  260. Ghandeharioun A, Fedor S, Sangermano L, Ionescu D, Alpert J, Dale C, Sontag D, Picard R (2017) Objective assessment of depressive symptoms with machine learning and wearable sensors data. In: 2017 seventh international conference on affective computing and intelligent interaction (ACII), pp 325–332. IEEE

  261. Gao Y, Long Y, Guan Y, Basu A, Baggaley J, Ploetz T (2019) Towards reliable, automated general movement assessment for perinatal stroke screening in infants using wearable accelerometers. Proc ACM Interact Mob Wearable Ubiquitous Technol 3(1):1–22

    Google Scholar 

  262. Parajuli N, Sreenivasan N, Bifulco P, Cesarelli M, Savino S, Niola V, Esposito D, Hamilton TJ, Naik GR, Gunawardana U et al (2019) Real-time emg based pattern recognition control for hand prostheses: a review on existing methods, challenges and future implementation. Sensors 19(20):4596

    Google Scholar 

  263. Samuel OW, Asogbon MG, Geng Y, Al-Timemy AH, Pirbhulal S, Ji N, Chen S, Fang P, Li G (2019) Intelligent emg pattern recognition control method for upper-limb multifunctional prostheses: advances, current challenges, and future prospects. IEEE Access 7:10150–10165

    Google Scholar 

  264. Taylor W, Shah SA, Dashtipour K, Zahid A, Abbasi QH, Imran MA (2020) An intelligent non-invasive real-time human activity recognition system for next-generation healthcare. Sensors 20(9):2653

    Google Scholar 

  265. Dashtipour K, Gogate M, Cambria E, Hussain A (2021) A novel context-aware multimodal framework for persian sentiment analysis. Neurocomputing 457:377–388

    Google Scholar 

  266. Wu W, Zhang H, Pirbhulal S, Mukhopadhyay SC, Zhang Y-T (2015) Assessment of biofeedback training for emotion management through wearable textile physiological monitoring system. IEEE Sens J 15(12):7087–7095

    Google Scholar 

  267. Yin Z, Zhao M, Wang Y, Yang J, Zhang J (2017) Recognition of emotions using multimodal physiological signals and an ensemble deep learning model. Comput Methods Progr Biomed 140:93–110

  268. Hssayeni MD, Ghoraani B (2021) Multi-modal physiological data fusion for affect estimation using deep learning. IEEE Access 9:21642–21652

  269. Khezri M, Firoozabadi M, Sharafat AR (2015) Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals. Comput Methods Progr Biomed 122(2):149–164

  270. Mohino-Herranz I, Gil-Pita R, García-Gómez J, Rosa-Zurera M, Seoane F (2020) A wrapper feature selection algorithm: an emotional assessment using physiological recordings from wearable sensors. Sensors 20(1):309

  271. Pareek P, Thakkar A (2021) A survey on video-based human action recognition: recent updates, datasets, challenges, and applications. Artif Intell Rev 54(3):2259–2322

  272. Chintalapati S, Raghunadh M (2013) Automated attendance management system based on face recognition algorithms. In: 2013 IEEE international conference on computational intelligence and computing research, pp 1–5. IEEE

  273. Lim JH, Teh EY, Geh MH, Lim CH (2017) Automated classroom monitoring with connected visioning system. In: 2017 Asia-Pacific signal and information processing association annual summit and conference (APSIPA ASC), pp 386–393. IEEE

  274. Aldrich FK (2003) Smart homes: past, present and future. In: Inside the smart home, pp. 17–39. Springer, Berlin

  275. Rodriguez E, Chan K (2016) Smart interactions for home healthcare: a semantic shift. Int J Arts Technol 9(4):299–319

  276. Moreno LV, Ruiz MLM, Hernández JM, Duboy MÁV, Lindén M (2017) The role of smart homes in intelligent homecare and healthcare environments. In: Ambient assisted living and enhanced living environments, pp 345–394. Elsevier

  277. Alsamhi SH, Ma O, Ansari M, Meng Q et al (2019) Greening internet of things for greener and smarter cities: a survey and future prospects. Telecommun Syst 72(4):609–632

  278. Howedi A, Lotfi A, Pourabdollah A (2019) Distinguishing activities of daily living in a multi-occupancy environment. In: Proceedings of the 12th ACM international conference on pervasive technologies related to assistive environments, pp 568–574

  279. Oukrich N, Maach A, Sabri E, Mabrouk E, Bouchard K (2016) Activity recognition using back-propagation algorithm and minimum redundancy feature selection method. In: 2016 4th IEEE international colloquium on information science and technology (CiSt), pp 818–823. IEEE

  280. Wilson DH, Atkeson C (2005) Simultaneous tracking and activity recognition (STAR) using many anonymous, binary sensors. In: International conference on pervasive computing, pp 62–79. Springer, Berlin

  281. Wang L, Gu T, Tao X, Lu J (2009) Sensor-based human activity recognition in a multi-user scenario. In: European conference on ambient intelligence, pp 78–87. Springer, Berlin

  282. Gu T, Wu Z, Wang L, Tao X, Lu J (2009) Mining emerging patterns for recognizing activities of multiple users in pervasive computing. In: 2009 6th annual international mobile and ubiquitous systems: networking and services, MobiQuitous, pp 1–10. IEEE

  283. Yu BX, Chang J, Liu L, Tian Q, Chen CW (2022) Towards a unified view on visual parameter-efficient transfer learning. arXiv:2210.00788

  284. Mittal H, Morgado P, Jain U, Gupta A (2022) Learning state-aware visual representations from audible interactions. arXiv:2209.13583

  285. Devlin J, Chang M-W, Lee K, Toutanova K (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805

  286. Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A et al (2020) Language models are few-shot learners. Adv Neural Inf Process Syst 33:1877–1901

  287. Chowdhery A, Narang S, Devlin J, Bosma M, Mishra G, Roberts A, Barham P, Chung HW, Sutton C, Gehrmann S et al (2022) PaLM: scaling language modeling with pathways. arXiv:2204.02311

  288. He J, Zhou C, Ma X, Berg-Kirkpatrick T, Neubig G (2021) Towards a unified view of parameter-efficient transfer learning. arXiv:2110.04366

  289. Bruce X, Liu Y, Zhang X, Zhong S-h, Chan KC (2022) MMNet: a model-based multimodal network for human action recognition in RGB-D videos. IEEE Trans Pattern Anal Mach Intell

  290. Liu L, Yu BX, Chang J, Tian Q, Chen C-W (2022) Prompt-matched semantic segmentation. arXiv:2208.10159

  291. Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J, et al (2021) Learning transferable visual models from natural language supervision. In: International conference on machine learning, pp 8748–8763. PMLR

  292. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780

  293. Voelker A, Kajić I, Eliasmith C (2019) Legendre memory units: continuous-time representation in recurrent neural networks. Adv Neural Inf Process Syst 32

  294. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst 30

  295. Takase S, Kiyono S (2021) Lessons on parameter sharing across layers in transformers. arXiv:2104.06022

  296. Zhou H, Zhang S, Peng J, Zhang S, Li J, Xiong H, Zhang W (2021) Informer: beyond efficient transformer for long sequence time-series forecasting. In: Proceedings of the AAAI conference on artificial intelligence, vol 35, pp 11106–11115

  297. Wei C, Fan H, Xie S, Wu C-Y, Yuille A, Feichtenhofer C (2022) Masked feature prediction for self-supervised visual pre-training. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 14668–14678

  298. Vicente-Sola A, Manna DL, Kirkland P, Di Caterina G, Bihl T (2022) Evaluating the temporal understanding of neural networks on event-based action recognition with DVS-Gesture-Chain. arXiv:2209.14915

  299. Arandjelovic R, Zisserman A (2017) Look, listen and learn. In: Proceedings of the IEEE international conference on computer vision, pp 609–617

  300. Arandjelovic R, Zisserman A (2018) Objects that sound. In: Proceedings of the European conference on computer vision (ECCV), pp 435–451

  301. Korbar B, Tran D, Torresani L (2018) Cooperative learning of audio and video models from self-supervised synchronization. Adv Neural Inf Process Syst 31

  302. Alwassel H, Mahajan D, Korbar B, Torresani L, Ghanem B, Tran D (2020) Self-supervised learning by cross-modal audio-video clustering. Adv Neural Inf Process Syst 33:9758–9770

  303. Asano Y, Patrick M, Rupprecht C, Vedaldi A (2020) Labelling unlabelled videos from scratch with multi-modal self-supervision. Adv Neural Inf Process Syst 33:4660–4671

  304. Morgado P, Li Y, Vasconcelos N (2020) Learning representations from audio-visual spatial alignment. Adv Neural Inf Process Syst 33:4733–4744

  305. Piergiovanni A, Angelova A, Ryoo MS (2020) Evolving losses for unsupervised video representation learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 133–142

  306. Morgado P, Misra I, Vasconcelos N (2021) Robust audio-visual instance discrimination. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 12934–12945

  307. Morgado P, Vasconcelos N, Misra I (2021) Audio-visual instance discrimination with cross-modal agreement. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 12475–12486

  308. Patrick M, Asano YM, Kuznetsova P, Fong R, Henriques JF, Zweig G, Vedaldi A (2020) Multi-modal self-supervision from generalized data transformations. arXiv:2003.04298

  309. Wang X, Cai Z, Gao D, Vasconcelos N (2019) Towards universal object detection by domain attention. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 7289–7298

  310. Zhou X, Koltun V, Krähenbühl P (2022) Simple multi-dataset detection. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 7571–7580

  311. Lambert J, Liu Z, Sener O, Hays J, Koltun V (2020) MSeg: a composite dataset for multi-domain semantic segmentation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 2879–2888

  312. Munro J, Damen D (2020) Multi-modal domain adaptation for fine-grained action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 122–132

  313. Song X, Zhao S, Yang J, Yue H, Xu P, Hu R, Chai H (2021) Spatio-temporal contrastive domain adaptation for action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 9787–9795

  314. Ghadiyaram D, Tran D, Mahajan D (2019) Large-scale weakly-supervised pre-training for video action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 12046–12055

  315. Duan H, Zhao Y, Xiong Y, Liu W, Lin D (2020) Omni-sourced webly-supervised learning for video recognition. In: European conference on computer vision, pp 670–688. Springer, Berlin

  316. Akbari H, Yuan L, Qian R, Chuang W-H, Chang S-F, Cui Y, Gong B (2021) VATT: transformers for multimodal self-supervised learning from raw video, audio and text. Adv Neural Inf Process Syst 34:24206–24221

  317. Likhosherstov V, Arnab A, Choromanski K, Lucic M, Tay Y, Weller A, Dehghani M (2021) PolyViT: co-training vision transformers on images, videos and audio. arXiv:2111.12993

  318. Liang J, Zhang E, Zhang J, Shen C (2022) Multi-dataset training of transformers for robust action recognition. arXiv:2209.12362

  319. Zheng N, Wen J, Liu R, Long L, Dai J, Gong Z (2018) Unsupervised representation learning with long-term dynamics for skeleton based action recognition. In: Proceedings of the AAAI conference on artificial intelligence, vol 32

  320. Su K, Liu X, Shlizerman E (2020) Predict & cluster: unsupervised skeleton based action recognition. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 9631–9640

  321. Lin L, Song S, Yang W, Liu J (2020) MS2L: multi-task self-supervised learning for skeleton based action recognition. In: Proceedings of the 28th ACM international conference on multimedia, pp 2490–2498

  322. Guo T, Liu H, Chen Z, Liu M, Wang T, Ding R (2022) Contrastive learning from extremely augmented skeleton sequences for self-supervised action recognition. Proc AAAI Conf Artif Intell 36:762–770

  323. Thoker FM, Doughty H, Snoek CG (2021) Skeleton-contrastive 3D action representation learning. In: Proceedings of the 29th ACM international conference on multimedia, pp 1655–1663

  324. Li L, Wang M, Ni B, Wang H, Yang J, Zhang W (2021c) 3D human action representation learning via cross-view consistency pursuit. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 4741–4750

  325. Mao Y, Zhou W, Lu Z, Deng J, Li H (2022) CMD: self-supervised 3D action representation learning with cross-modal mutual distillation. arXiv:2208.12448

  326. Jain Y, Tang CI, Min C, Kawsar F, Mathur A (2022) ColloSSL: collaborative self-supervised learning for human activity recognition. Proc ACM Interact Mob Wearable Ubiquitous Technol 6(1):1–28

  327. Tran T, Do T-T, Reid I, Carneiro G (2019) Bayesian generative active deep learning. In: International conference on machine learning, pp 6295–6304. PMLR

  328. Seyfioğlu MS, Özbayoğlu AM, Gürbüz SZ (2018) Deep convolutional autoencoder for radar-based classification of similar aided and unaided human activities. IEEE Trans Aerosp Electron Syst 54(4):1709–1723

  329. Zou H, Zhou Y, Yang J, Jiang H, Xie L, Spanos CJ (2018) DeepSense: device-free human activity recognition via autoencoder long-term recurrent convolutional network. In: 2018 IEEE international conference on communications (ICC), pp 1–6. IEEE

  330. Abedin A, Ehsanpour M, Shi Q, Rezatofighi H, Ranasinghe DC (2021) Attend and discriminate: beyond the state-of-the-art for human activity recognition using wearable sensors. Proc ACM Interact Mob Wearable Ubiquitous Technol 5(1):1–22

  331. Huynh-The T, Hua C-H, Tu NA, Kim D-S (2020) Physical activity recognition with statistical-deep fusion model using multiple sensory data for smart health. IEEE Internet Things J 8(3):1533–1543

  332. Hanif MA, Akram T, Shahzad A, Khan MA, Tariq U, Choi J-I, Nam Y, Zulfiqar Z (2022) Smart devices based multisensory approach for complex human activity recognition

  333. Pires IM, Pombo N, Garcia NM, Flórez-Revuelta F (2018) Multi-sensor mobile platform for the recognition of activities of daily living and their environments based on artificial neural networks. In: IJCAI, pp 5850–5852

  334. Zhang X, Yao L, Huang C, Wang S, Tan M, Long G, Wang C (2018) Multi-modality sensor data classification with selective attention. arXiv:1804.05493

  335. Wang Y, Yao Q, Kwok JT, Ni LM (2020) Generalizing from a few examples: a survey on few-shot learning. ACM Comput Surv 53(3):1–34

  336. Xie T-T, Tzelepis C, Fu F, Patras I (2021) Few-shot action localization without knowing boundaries. In: Proceedings of the 2021 international conference on multimedia retrieval, pp 339–348

  337. Liu J, Shahroudy A, Perez M, Wang G, Duan L-Y, Kot AC (2019) NTU RGB+D 120: a large-scale benchmark for 3D human activity understanding. IEEE Trans Pattern Anal Mach Intell 42(10):2684–2701

  338. Zhang H, Zhang L, Qi X, Li H, Torr PH, Koniusz P (2020) Few-shot action recognition with permutation-invariant attention. In: European conference on computer vision, pp 525–542. Springer, Berlin

  339. Dai R, Lu C, Avidan M, Kannampallil T (2021) RespWatch: robust measurement of respiratory rate on smartwatches with photoplethysmography. In: Proceedings of the international conference on internet-of-things design and implementation

  340. Li C, Niu D, Jiang B, Zuo X, Yang J (2021) Meta-HAR: federated representation learning for human activity recognition. In: Proceedings of the web conference, pp 912–922

  341. Mothukuri V, Parizi RM, Pouriyeh S, Huang Y, Dehghantanha A, Srivastava G (2021) A survey on security and privacy of federated learning. Future Gener Comput Syst 115:619–640

  342. Xiao Z, Xu X, Xing H, Song F, Wang X, Zhao B (2021) A federated learning system with enhanced feature extraction for human activity recognition. Knowl-Based Syst 229:107338

  343. Pham HH, Khoudour L, Crouzil A, Zegers P, Velastin SA (2022) Video-based human action recognition using deep learning: a review. arXiv:2208.03775

  344. Ioffe S, Szegedy C (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In: International conference on machine learning, pp 448–456. PMLR

  345. Orchard G, Jayawant A, Cohen GK, Thakor N (2015) Converting static image datasets to spiking neuromorphic datasets using saccades. Front Neurosci 9:437

  346. Li H, Liu H, Ji X, Li G, Shi L (2017) CIFAR10-DVS: an event-stream dataset for object classification. Front Neurosci 11:309

  347. Posch C, Matolin D, Wohlgenannt R (2010) A QVGA 143 dB dynamic range frame-free PWM image sensor with lossless pixel-level video compression and time-domain CDS. IEEE J Solid-State Circuits 46(1):259–275

  348. Lichtsteiner P, Posch C, Delbruck T (2008) A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE J Solid-State Circuits 43(2):566–576

  349. Cao Z, Simon T, Wei S-E, Sheikh Y (2017) Realtime multi-person 2D pose estimation using part affinity fields. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7291–7299

  350. Xu J, Yu Z, Ni B, Yang J, Yang X, Zhang W (2020) Deep kinematics analysis for monocular 3D human pose estimation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 899–908

  351. Nie Q, Liu Z, Liu Y (2020) Unsupervised 3D human pose representation with viewpoint and pose disentanglement. In: European conference on computer vision, pp 102–118. Springer, Berlin

  352. Deng J, Pan Y, Yao T, Zhou W, Li H, Mei T (2020) Single shot video object detector. IEEE Trans Multimed 23:846–858

  353. Deng J, Yang Z, Liu D, Chen T, Zhou W, Zhang Y, Li H, Ouyang W (2022) TransVG++: end-to-end visual grounding with language conditioned vision transformer. arXiv:2206.06619

  354. Rao H, Xu S, Hu X, Cheng J, Hu B (2021) Augmented skeleton based contrastive action learning with momentum LSTM for unsupervised action recognition. Inf Sci 569:90–109

  355. Purushwalkam S, Ye T, Gupta S, Gupta A (2020) Aligning videos in space and time. In: European conference on computer vision, pp 262–278. Springer, Berlin

  356. Recasens A, Luc P, Alayrac J-B, Wang L, Strub F, Tallec C, Malinowski M, Pătrăucean V, Altché F, Valko M et al (2021) Broaden your views for self-supervised video learning. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 1255–1265

  357. Abdel-Basset M, Hawash H, Chang V, Chakrabortty RK, Ryan M (2020) Deep learning for heterogeneous human activity recognition in complex IoT applications. IEEE Internet Things J

  358. Arjovsky M, Bottou L, Gulrajani I, Lopez-Paz D (2019) Invariant risk minimization. arXiv:1907.02893

  359. Konečný J, McMahan B, Ramage D (2015) Federated optimization: distributed optimization beyond the datacenter. arXiv:1511.03575

  360. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778

  361. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4700–4708

  362. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S et al (2020) An image is worth 16x16 words: transformers for image recognition at scale. arXiv:2010.11929

  363. Arnab A, Dehghani M, Heigold G, Sun C, Lučić M, Schmid C (2021) ViViT: a video vision transformer. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 6836–6846

  364. Bertasius G, Wang H, Torresani L (2021) Is space-time attention all you need for video understanding? In: ICML, vol 2, p 4

  365. Liang J, Zhu H, Zhang E, Zhang J (2022) Stargazer: a transformer-based driver action detection system for intelligent transportation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 3160–3167

Acknowledgements

NOT APPLICABLE

Funding

NOT APPLICABLE

Author information

Corresponding author

Correspondence to Pranjal Kumar.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Consent to Participate

Not Applicable.

Consent for Publication

Yes.

Ethical Approval

Not Applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Kumar, P., Chauhan, S. & Awasthi, L.K. Human Activity Recognition (HAR) Using Deep Learning: Review, Methodologies, Progress and Future Research Directions. Arch Computat Methods Eng 31, 179–219 (2024). https://doi.org/10.1007/s11831-023-09986-x
