Abstract
In this paper, we propose new deep learning architectures that fuse data provided by multiple sensors. More specifically, we combine classical features extracted from one sensor with the raw data of the other sensors. To make this data fusion possible, the architectures rely on convolution, dense, and concatenation layers. The MHEALTH (Mobile HEALTH) dataset was used to support our study. The results show that the proposed architectures are suitable for future use in the Human Activity Recognition (HAR) domain, since their performance is comparable to or better than that reported in the recent literature and obtained by the reference architectures. Indeed, we reached accuracies of approximately 0.965 and 0.995 for the leave-one-subject-out and train-test evaluation strategies, respectively.
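To illustrate the kind of architecture described above, the following is a minimal sketch of a two-branch, feature-level fusion network in TensorFlow/Keras: one branch applies 1D convolutions to the raw signal of a single sensor, the other processes hand-crafted features from the remaining sensors through a dense layer, and the two representations are concatenated before classification. All window lengths, channel counts, feature counts, and layer sizes below are illustrative assumptions rather than the exact configuration evaluated in the paper.

```python
# Minimal sketch of a feature-level fusion architecture (TensorFlow/Keras).
# Shapes and layer sizes are illustrative assumptions, not the paper's values.
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW = 128        # assumed number of samples per window of raw sensor data
RAW_CHANNELS = 3    # assumed raw channels from one sensor (e.g. a tri-axial accelerometer)
N_FEATURES = 40     # assumed number of classical features computed from the other sensors
N_CLASSES = 12      # number of activity classes in the MHEALTH dataset

# Branch 1: convolutions over the raw time series of one sensor.
raw_in = layers.Input(shape=(WINDOW, RAW_CHANNELS), name="raw_signal")
x = layers.Conv1D(64, kernel_size=5, activation="relu")(raw_in)
x = layers.MaxPooling1D(pool_size=2)(x)
x = layers.Conv1D(64, kernel_size=5, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)

# Branch 2: dense layer over the classical features extracted from the other sensors.
feat_in = layers.Input(shape=(N_FEATURES,), name="classical_features")
y = layers.Dense(64, activation="relu")(feat_in)

# Feature-level fusion: concatenate both representations, then classify.
fused = layers.Concatenate()([x, y])
z = layers.Dense(128, activation="relu")(fused)
out = layers.Dense(N_CLASSES, activation="softmax")(z)

model = Model(inputs=[raw_in, feat_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Because both branches are trained jointly end to end, the fusion happens at the feature level rather than at the decision level, which is the general idea the proposed architectures build on.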
Acknowledgments
The authors would like to acknowledge the financial contribution of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Foundation for Innovation (CFI).
Cite this article
Maitre, J., Bouchard, K. & Gaboury, S. Alternative Deep Learning Architectures for Feature-Level Fusion in Human Activity Recognition. Mobile Netw Appl (2021). https://doi.org/10.1007/s11036-021-01741-5
Keywords
- Human activity recognition
- MHEALTH dataset
- Data fusion
- Deep learning
- Feature extraction