Alternative Deep Learning Architectures for Feature-Level Fusion in Human Activity Recognition

Abstract

In this paper, we propose new deep learning architectures to fuse data provided by multiple sensors. More specifically, we combine classical features extracted from one sensor with the raw data of the other sensors. To make this fusion possible, we exploit convolution, dense, and concatenation layers. The Mobile HEALTH (MHEALTH) dataset has been used to support our study. The results show that the proposed architectures are suitable for future use in the Human Activity Recognition (HAR) domain, since their performance is comparable to or better than that reported in the recent literature and achieved by the reference architectures. Indeed, we reached accuracies of approximately 0.965 and 0.995 for the leave-one-subject-out and train-test strategies, respectively.
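The feature-level fusion described in the abstract (a convolutional branch over raw sensor windows, a dense branch over hand-crafted features, and a concatenation of the two embeddings before classification) can be sketched in plain NumPy. This is a minimal illustrative forward pass, not the authors' implementation: all layer sizes, window dimensions, and weights are hypothetical, and random weights stand in for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, for illustration only.
WIN, CH = 64, 3       # raw window length and channels (e.g. one accelerometer)
N_FEAT = 12           # hand-crafted features extracted from another sensor
N_CLASS = 12          # MHEALTH distinguishes 12 activities

def conv1d(x, w):
    """Valid 1-D convolution with ReLU: x (WIN, CH), w (K, CH, F) -> (WIN-K+1, F)."""
    K = w.shape[0]
    out = np.stack([np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
                    for t in range(x.shape[0] - K + 1)])
    return np.maximum(out, 0.0)

def dense(x, w, b):
    """Fully connected layer with ReLU."""
    return np.maximum(x @ w + b, 0.0)

# Random weights stand in for trained parameters.
w_conv = rng.normal(size=(5, CH, 8)) * 0.1
w_feat, b_feat = rng.normal(size=(N_FEAT, 16)) * 0.1, np.zeros(16)
w_out, b_out = rng.normal(size=(8 + 16, N_CLASS)) * 0.1, np.zeros(N_CLASS)

raw = rng.normal(size=(WIN, CH))        # raw-data branch input
feats = rng.normal(size=(N_FEAT,))      # classical-feature branch input

raw_emb = conv1d(raw, w_conv).mean(axis=0)   # global average pooling -> (8,)
feat_emb = dense(feats, w_feat, b_feat)      # -> (16,)

fused = np.concatenate([raw_emb, feat_emb])  # feature-level fusion -> (24,)
logits = fused @ w_out + b_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax over the 12 classes
print(probs.shape)                           # prints (12,)
```

The key design point is that fusion happens at the embedding level: each modality is first mapped to its own representation, and only the concatenated vector is passed to the classifier, so branches with very different input shapes can be combined.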

Notes

  1.

    http://archive.ics.uci.edu/ml/datasets/mhealth+dataset

Acknowledgments

The authors would like to acknowledge the financial contribution of the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Foundation for Innovation (CFI).

Author information


Correspondence to Julien Maitre.

About this article

Cite this article

Maitre, J., Bouchard, K. & Gaboury, S. Alternative Deep Learning Architectures for Feature-Level Fusion in Human Activity Recognition. Mobile Netw Appl (2021). https://doi.org/10.1007/s11036-021-01741-5

Keywords

  • Human activity recognition
  • MHEALTH dataset
  • Data fusion
  • Deep learning
  • Feature extraction