
Deep Convolutional Bidirectional LSTM for Complex Activity Recognition with Missing Data

Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 199)

Abstract

Complex activity recognition using multiple on-body sensors is challenging due to missing samples, misaligned data timestamps across sensors, and variations in sampling rates. In this paper, we introduce a robust training pipeline that handles sampling rate variability, missing data, and misaligned data timestamps using intelligent data augmentation techniques. Specifically, we apply controlled jitter to the window length and introduce artificial misalignments in data timestamps between sensors, along with masking representations of missing data. We evaluate our pipeline on the Cooking Activity Dataset with Macro and Micro Activities, benchmarking the performance of a deep convolutional bidirectional long short-term memory (DCBL) classifier. In our evaluations, DCBL achieves test accuracies of 88% and 72% for macro- and micro-activity classification, respectively, outperforming state-of-the-art vanilla activity classifiers.
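The three augmentation steps named in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, window length, jitter range, shift range, and drop probability are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter_window(signal, base_len=128, max_jitter=16):
    """Crop a window whose length is jittered around base_len,
    then linearly resample it back to base_len samples."""
    win_len = int(base_len + rng.integers(-max_jitter, max_jitter + 1))
    start = int(rng.integers(0, len(signal) - win_len + 1))
    window = signal[start:start + win_len]
    idx = np.linspace(0, win_len - 1, base_len)
    return np.interp(idx, np.arange(win_len), window)

def misalign(sensor_a, sensor_b, max_shift=8):
    """Shift one sensor stream relative to the other to mimic
    timestamp misalignment between on-body sensors."""
    shift = int(rng.integers(1, max_shift + 1))
    return sensor_a[shift:], sensor_b[:-shift]

def mask_missing(window, drop_prob=0.1, mask_value=0.0):
    """Zero out random samples and return a binary mask channel
    telling the network which samples were actually present."""
    present = rng.random(window.shape) >= drop_prob
    return np.where(present, window, mask_value), present.astype(np.float32)

# Usage on a synthetic 1-D accelerometer trace:
sig = np.sin(np.linspace(0, 20, 1000))
w = jitter_window(sig)                 # fixed-length window, jittered crop
w_masked, mask = mask_missing(w)       # masked window + presence mask
a, b = misalign(sig, sig)              # artificially misaligned streams
```

In training, the masked window and its presence mask would typically be stacked as input channels so the classifier can learn to discount missing samples rather than treating zeros as real readings.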

S. S. Saha and S. S. Sandha contributed equally to this work.



Acknowledgements

The research reported in this paper was sponsored in part by the CONIX Research Center, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA, the Army Research Laboratory (ARL) under Cooperative Agreement W911NF-17-2-0196 and the King Abdullah University of Science and Technology (KAUST) through its Sensor Innovation research program. Any findings in this material are those of the author(s) and do not reflect the views of any of the above funding agencies. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

Author information


Correspondence to Swapnil Sayan Saha or Sandeep Singh Sandha.


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Saha, S.S., Sandha, S.S., Srivastava, M. (2021). Deep Convolutional Bidirectional LSTM for Complex Activity Recognition with Missing Data. In: Ahad, M.A.R., Lago, P., Inoue, S. (eds) Human Activity Recognition Challenge. Smart Innovation, Systems and Technologies, vol 199. Springer, Singapore. https://doi.org/10.1007/978-981-15-8269-1_4
