Abstract
Combining data from different sources into an integrated view has become a major trend, driven by the evolution of the Internet of Things (IoT) over recent years. The fusion of different modalities has applications in various fields, including healthcare and security systems. Human activity recognition (HAR) is among the most common applications of healthcare and eldercare systems. Inertial measurement unit (IMU) wearable sensors, such as accelerometers and gyroscopes, are often utilized for HAR. In this paper, we investigate the performance of wearable IMU sensors combined with vital signs sensors for HAR. Extensive feature extraction, covering both time- and frequency-domain features as well as transitional features for the vital signs, was performed along with a feature selection method. Classification algorithms and different early and late fusion methods were applied to a public dataset. Experimental results revealed that both IMU and vital signs alone achieve reasonable HAR accuracy and F1-score across all classes. Feature selection significantly reduced the number of both IMU and vital signs features while also improving classification accuracy. The remaining early- and late-level fusion methods likewise outperformed each modality alone, reaching an accuracy of up to 95.32%.
Keywords
- Human activity recognition
- Wearable sensors
- Vital signs
- Sensor fusion
- Feature selection
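For concreteness, the following is a minimal sketch of the kind of pipeline the abstract describes: hand-crafted time- and frequency-domain features per signal window, univariate feature selection, and early fusion (feature concatenation) versus late fusion (averaging per-modality classifier probabilities). It is illustrative only: it uses synthetic data, and SelectKBest and RandomForestClassifier from scikit-learn are assumed stand-ins rather than the authors' actual feature set, selection method, or classifiers.

```python
# Illustrative sketch of early vs. late fusion for HAR (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def window_features(windows):
    """Basic time- and frequency-domain features per window."""
    mean = windows.mean(axis=1)
    std = windows.std(axis=1)
    spectrum = np.abs(np.fft.rfft(windows, axis=1))
    dom_freq = spectrum.argmax(axis=1).astype(float)     # dominant frequency bin
    energy = (spectrum ** 2).mean(axis=1)                # mean spectral energy
    return np.column_stack([mean, std, dom_freq, energy])

# Synthetic stand-ins: 600 windows of 128 samples each, 4 activity classes.
y = rng.integers(0, 4, size=600)
imu = rng.normal(size=(600, 128)) + y[:, None]           # IMU channel windows
vitals = rng.normal(size=(600, 128)) + 0.5 * y[:, None]  # vital-sign windows

X_imu, X_vit = window_features(imu), window_features(vitals)
tr, te = train_test_split(np.arange(len(y)), test_size=0.3,
                          random_state=0, stratify=y)

# Early fusion: concatenate modality features, then select the most relevant.
X_early = np.hstack([X_imu, X_vit])
selector = SelectKBest(f_classif, k=6).fit(X_early[tr], y[tr])
clf_early = RandomForestClassifier(random_state=0).fit(
    selector.transform(X_early[tr]), y[tr])
acc_early = accuracy_score(
    y[te], clf_early.predict(selector.transform(X_early[te])))

# Late fusion: one classifier per modality, average their class probabilities.
clf_imu = RandomForestClassifier(random_state=0).fit(X_imu[tr], y[tr])
clf_vit = RandomForestClassifier(random_state=0).fit(X_vit[tr], y[tr])
proba = (clf_imu.predict_proba(X_imu[te]) + clf_vit.predict_proba(X_vit[te])) / 2
acc_late = accuracy_score(y[te], proba.argmax(axis=1))

print(f"early fusion: {acc_early:.3f}, late fusion: {acc_late:.3f}")
```

The sketch mirrors the paper's distinction between fusion levels: early fusion merges IMU and vital-sign feature vectors before a single classifier, while late fusion combines per-modality classifier outputs, with unweighted probability averaging shown here as one simple late-fusion rule.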
Acknowledgment
This research was supported by the xR4DRAMA project (grant agreement No 952133), which is funded by the European Union's Horizon 2020 research and innovation programme, and by the REA project (project code: T1EDK-00686), co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH - CREATE - INNOVATE.