Assessment of Deep Learning Models for Human Activity Recognition on Multi-variate Time Series Data and Non-targeted Adversarial Attack

  • Chapter
Advances in Intelligent Systems Research and Innovation

Abstract

Human activity recognition (HAR) is one of the leading research fields in ubiquitous computing, aiming to integrate seamless technologies into our daily lives. Research in this field focuses on fine-grained activity recognition with minimal technological deployment while accounting for human factors. Because activity data are inherently noisy and random, domain-specific knowledge plays a significant role in processing them, and improving prediction accuracy under limited infrastructural support and cost remains essential for conducting this research at scale. The data set used in this study, acquired from the UCI Machine Learning Repository, is modified through a tree-based feature-engineering method that significantly alters the intrinsic time-series pattern of the original data. The primary focus of this paper is to improve the classification accuracy of the human activities in the data set using deep learning classifier models. Because the modified data are largely devoid of time-series structure, benchmark time-series classifiers, namely Keras-LSTM and RNN-LSTM, achieve lower classification accuracy, whereas CNN-based networks demonstrate consistently improved training performance. The state-of-the-art ResNet classifier, introduced by Microsoft Research in 2015, achieves 99.9% classification accuracy on this data set. A further aim of this study is to assess the risk posed by adversarial attacks on machine learning models: the high vulnerability of such models to adversarial attacks has emerged as a security concern for machine learning deployment pipelines.
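The ResNet architecture mentioned above is built from residual blocks, in which a stack of layers learns a residual function F(x) that is added back to the block's input through a skip connection, so each block outputs relu(F(x) + x). A minimal NumPy sketch of this idea (the dense toy layers, sizes, and random weights are illustrative assumptions, not the chapter's actual convolutional model):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    """One simplified residual block: two linear transforms learn the
    residual F(x), and the skip connection adds x back before the final
    nonlinearity, i.e. output = relu(F(x) + x)."""
    h = relu(w1 @ x)      # first transform + nonlinearity
    f = w2 @ h            # second transform: the residual F(x)
    return relu(f + x)    # skip connection

# Illustrative weights and input (hypothetical, not from the chapter)
rng = np.random.default_rng(0)
x = rng.normal(size=4)
w1 = rng.normal(size=(4, 4)) * 0.1
w2 = rng.normal(size=(4, 4)) * 0.1
y = residual_block(x, w1, w2)
```

The skip connection is the key design choice: if the weights drive F(x) toward zero, the block reduces to (a rectified) identity, which is what makes very deep networks like ResNet trainable.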
To the authors’ best knowledge, no prior study has examined the insecurity associated with this human activity recognition data set and the related benchmark models. In this study, two prevalent adversarial approaches, the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), are employed to carry out non-targeted attacks on the pre-trained ResNet model. The accuracy loss over a small range of added perturbation is dramatic: nearly 96% of the classification accuracy is lost in the process. Finally, we compare how effectively FGSM and BIM reduce the classification accuracy of the pre-trained model. The findings shed light on how the nature of the data influences classification accuracy and on the resilience of a state-of-the-art classifier when facing adversarial attacks.
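Mechanically, FGSM takes a single step of size ε along the sign of the loss gradient with respect to the input, while BIM repeats smaller steps of size α and clips the result back into an ε-ball around the original input. A minimal NumPy sketch on a toy logistic-regression classifier (the weights, data point, and step sizes below are illustrative assumptions, not values from the chapter, which attacks a pre-trained ResNet):

```python
import numpy as np

def grad_loss(x, w, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x
    for a logistic model p = sigmoid(w . x)."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return (p - y) * w  # dL/dx

def fgsm(x, w, y, eps):
    """Fast Gradient Sign Method: one step of size eps along the
    sign of the input gradient."""
    return x + eps * np.sign(grad_loss(x, w, y))

def bim(x, w, y, eps, alpha, steps):
    """Basic Iterative Method: repeated FGSM steps of size alpha,
    clipped to an eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, w, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy example: a point correctly classified as class 1 (hypothetical values)
w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])
y = 1
x_fgsm = fgsm(x, w, y, eps=0.3)
x_bim = bim(x, w, y, eps=0.3, alpha=0.1, steps=5)
```

Both attacks are non-targeted in this form: they simply increase the loss on the true label, pushing the model toward any wrong prediction rather than a chosen one.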


Author information


Correspondence to Rashedur M. Rahman.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter

Tasmin, M. et al. (2022). Assessment of Deep Learning Models for Human Activity Recognition on Multi-variate Time Series Data and Non-targeted Adversarial Attack. In: Sgurev, V., Jotsov, V., Kacprzyk, J. (eds) Advances in Intelligent Systems Research and Innovation. Studies in Systems, Decision and Control, vol 379. Springer, Cham. https://doi.org/10.1007/978-3-030-78124-8_6
