
Understanding Sensor Data Using Deep Learning Methods on Resource-Constrained Edge Devices

  • Junzhao Du
  • Sicong Liu
  • Yuheng Wei
  • Hui Liu
  • Xin Wang
  • Kaiming Nan
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 812)

Abstract

With the development of the Internet of Things, ever more edge devices (such as smartphones, tablets, wearables, embedded devices, and gateways) generate huge amounts of rich sensor data every day. Building on these data, deep learning based recognition applications provide users with various recognition services directly on edge devices. However, a fundamental problem these applications face is how to run deep learning algorithms effectively and promptly on a resource-constrained platform. Some researchers have proposed offloading all computation to the cloud and then returning the results to the edge device, but this procedure is often time-consuming because of data transmission. A better choice is therefore to train deep learning models on the cloud side and execute the trained models directly on edge devices for inference. Meanwhile, deep learning based mobile applications must also satisfy the requirements of low latency, low storage and low energy consumption. To fulfill these objectives, we propose a new deep learning compression algorithm. Based on an audio recognition system, we conduct comprehensive experiments that compare the proposed light-weight model with standard state-of-the-art compression algorithms in terms of inference accuracy, processing delay, CPU load, energy cost and storage footprint.
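As a purely illustrative aid (not the paper's own algorithm, which the abstract only summarizes), the sketch below shows two generic steps that light-weight edge models commonly combine before deployment: magnitude-based weight pruning and 8-bit linear quantization of a trained layer's weights. The function names (prune_by_magnitude, quantize_uint8) and the NumPy-only setting are assumptions made for illustration.

    # Minimal sketch of generic model compression: magnitude pruning followed by
    # 8-bit linear quantization of a trained layer's weight matrix.
    import numpy as np

    def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero out the smallest-magnitude weights so that `sparsity` fraction is zero."""
        threshold = np.quantile(np.abs(weights), sparsity)
        return np.where(np.abs(weights) < threshold, 0.0, weights)

    def quantize_uint8(weights: np.ndarray):
        """Linearly map float weights to uint8 codes plus (scale, offset) metadata."""
        w_min, w_max = weights.min(), weights.max()
        scale = (w_max - w_min) / 255.0 or 1.0
        codes = np.round((weights - w_min) / scale).astype(np.uint8)
        return codes, scale, w_min

    def dequantize(codes: np.ndarray, scale: float, w_min: float) -> np.ndarray:
        """Recover approximate float weights for inference on the edge device."""
        return codes.astype(np.float32) * scale + w_min

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        w = rng.normal(size=(256, 128)).astype(np.float32)  # stand-in for a trained dense layer
        w_pruned = prune_by_magnitude(w, sparsity=0.8)       # keep roughly 20% of weights
        codes, scale, w_min = quantize_uint8(w_pruned)       # 4x smaller than float32 storage
        err = np.abs(dequantize(codes, scale, w_min) - w_pruned).max()
        print(f"weights kept: {np.count_nonzero(w_pruned) / w.size:.1%}, "
              f"max dequantization error: {err:.4f}")

In a real pipeline, steps of this kind would be applied to the cloud-trained network before shipping the compressed weights to the edge device, which then performs inference locally.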

Keywords

Sensor data analysis · Edge computing · Deep learning · Compression method

Notes

Acknowledgements

This work is partially supported by the National Natural Science Foundation of China (NSFC) under Grants No. 61472312 and No. 61502374, the Fundamental Research Funds for the Central Universities under Grant JBZ171002, and the CETC Shining Star Innovation.


Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Junzhao Du (1)
  • Sicong Liu (2)
  • Yuheng Wei (2)
  • Hui Liu (1)
  • Xin Wang (2)
  • Kaiming Nan (2)
  1. School of Software and Institute of Software Engineering, Xidian University, Xi'an, China
  2. School of Computer Science and Technology, Xidian University, Xi'an, China
