Abstract
Machine Learning (ML) has become the state of the art for various tasks, including the classification of accelerometer data. In the Internet of Things (IoT) world, the available low-power hardware is often a microcontroller. However, one of the challenges of embedding machine learning on microcontrollers is that the available memory is very limited, and this memory is also occupied by the other software components the IoT device needs. The problem is thus to design ML architectures with a very low memory footprint while maintaining a low error rate. In this paper, a methodology is proposed for the deployment of efficient machine learning on microcontrollers. This methodology is then used to investigate the effect of compression techniques, namely pruning, quantization, and coding, on the memory budget. While each of these techniques is known to reduce model size on its own, it is less clear how they interoperate to reach the best accuracy-to-memory trade-off. A Convolutional Neural Network (CNN) and a Human Activity Recognition (HAR) application have been adopted to validate the study.
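To make the interplay between the three techniques concrete, below is a minimal, self-contained Python sketch (not the pipeline used in the paper) that composes them on a single weight tensor: magnitude pruning, symmetric 8-bit quantization, and Huffman coding. All function names, the 75% sparsity target, and the tensor shape are assumptions chosen purely for illustration.

```python
# Minimal sketch (assumed names and parameters, not the paper's exact pipeline):
# compose magnitude pruning, symmetric int8 quantization, and Huffman coding
# on one weight tensor and compare the resulting storage costs.
import heapq
from collections import Counter

import numpy as np


def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` is reached."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)


def quantize_int8(w):
    """Symmetric uniform quantization of float weights to int8."""
    scale = float(np.max(np.abs(w))) / 127.0
    scale = scale if scale > 0 else 1.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale


def huffman_code_lengths(symbols):
    """Return the Huffman code length (in bits) of each distinct symbol."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single symbol still needs 1 bit
        return {s: 1 for s in freq}
    # Heap entries: (subtree frequency, tie-breaker, symbols in subtree).
    heap = [(n, i, [s]) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    lengths = Counter()
    tie = len(heap)
    while len(heap) > 1:
        n1, _, s1 = heapq.heappop(heap)
        n2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:  # each merge adds one bit to the merged codes
            lengths[s] += 1
        heapq.heappush(heap, (n1 + n2, tie, s1 + s2))
        tie += 1
    return lengths


rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

pruned = prune_by_magnitude(w, sparsity=0.75)  # assumed sparsity target
q, scale = quantize_int8(pruned)

counts = Counter(q.flatten().tolist())
lengths = huffman_code_lengths(q.flatten().tolist())
coded_bits = sum(lengths[s] * n for s, n in counts.items())

print(f"float32 weights : {w.nbytes} bytes")
print(f"int8 weights    : {q.nbytes} bytes")
print(f"Huffman coded   : {coded_bits // 8} bytes")
```

In this toy example the Huffman-coded size falls well below the raw int8 size because pruning skews the quantized symbol distribution heavily toward zero, which the entropy coder then exploits. This is precisely the kind of interaction between techniques, rather than their individual effects, that the paper sets out to quantify.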
Acknowledgement
This work has been partly supported by the ADEME PERFECTO 2021 grant. The authors would like to thank the GoodFloow company for its financial support and technical guidance on this project.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Younes, H., Blevec, H.L., Léonardon, M., Gripon, V. (2023). Inter-Operability of Compression Techniques for Efficient Deployment of CNNs on Microcontrollers. In: Valle, M., et al. (eds.) Advances in System-Integrated Intelligence. SYSINT 2022. Lecture Notes in Networks and Systems, vol. 546. Springer, Cham. https://doi.org/10.1007/978-3-031-16281-7_51
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16280-0
Online ISBN: 978-3-031-16281-7
eBook Packages: Engineering (R0)