
Bayesian Optimization of Neural Architectures for Human Activity Recognition

  • Aomar Osmani
  • Massinissa Hamidi
Chapter
Part of the Springer Series in Adaptive Environments book series (SPSADENV)

Abstract

The design of neural architectures is a critical aspect of deep-learning-based methods. In this chapter, we explore the suitability of different neural architectures for the recognition of mobility-related human activities. Neural architecture search (NAS) is attracting considerable attention in the machine learning community and has improved the performance of deep learning models on tasks such as language modeling and image recognition. Deep learning techniques have been applied successfully to human activity recognition (HAR); however, the design of competitive architectures remains cumbersome and time-consuming, and relies strongly on domain expertise. To address this, we propose a large-scale, systematic experimental setup for designing and evaluating neural architectures for HAR applications. Specifically, we use a Bayesian optimization (BO) procedure based on a Gaussian process surrogate model to tune the architectures' hyper-parameters. We train and evaluate more than 600 different architectures, which we then analyze via the functional ANalysis Of VAriance (fANOVA) framework to assess the relevance of each hyper-parameter. We evaluate our approach on the Sussex-Huawei Locomotion and Transportation (SHL) dataset, a highly versatile, sensor-rich, and precisely annotated dataset of human locomotion modes.
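
The following Python loop is a minimal sketch of the kind of GP-based Bayesian optimization procedure described above, not the chapter's actual implementation. It assumes a toy one-dimensional search space, a synthetic objective standing in for "train an architecture with these hyper-parameters and return its validation error", and an expected-improvement acquisition function (the abstract does not state which acquisition function the authors use).

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)

    def objective(x):
        # Hypothetical stand-in for "train an architecture with
        # hyper-parameter x and return its validation error" (lower is better).
        return np.sin(3 * x) + 0.1 * x ** 2

    def expected_improvement(candidates, gp, best_y, xi=0.01):
        # EI acquisition for minimization: the expected improvement of each
        # candidate over the best objective value observed so far.
        mu, sigma = gp.predict(candidates, return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        z = (best_y - mu - xi) / sigma
        return (best_y - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

    # Initial random design, then iterate: fit the GP surrogate to all
    # observations, maximize EI over random candidates, evaluate the winner.
    X = rng.uniform(-2.0, 2.0, size=(5, 1))
    y = np.array([objective(x[0]) for x in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(25):
        gp.fit(X, y)
        candidates = rng.uniform(-2.0, 2.0, size=(1000, 1))
        ei = expected_improvement(candidates, gp, y.min())
        x_next = candidates[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))

    print(f"best x = {X[np.argmin(y)][0]:.3f}, best objective = {y.min():.3f}")

In the chapter's setting, each call to the objective corresponds to training and evaluating one of the 600+ candidate architectures, which is exactly why a cheap surrogate model is fitted in its place before committing to the next expensive evaluation.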


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Laboratoire LIPN (UMR CNRS 7030), PRES Sorbonne Paris Cité, Villetaneuse, France
