
Domain Generalization on Constrained Platforms: On the Compatibility with Pruning Techniques

  • Conference paper

Internet of Things (GIoTS 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13533)

Abstract

The wide deployment of Machine Learning models is an essential evolution of Artificial Intelligence, notably through the porting of deep neural networks onto constrained hardware platforms such as 32-bit microcontrollers. For many IoT applications, the deployment of such complex models is hindered by two major issues that are usually handled separately. For supervised tasks, training a model requires a large quantity of labelled data, which is expensive to collect or even intractable in many real-world applications. Furthermore, the inference process demands memory, computing and energy capacities that exceed those of typical IoT platforms. We jointly tackle these issues by investigating the efficiency of model pruning techniques under the scope of the single domain generalization problem. Our experiments show that a pruned neural network retains the benefit of training with single domain generalization algorithms, despite pruning having a larger impact on its performance. We emphasize the importance of the pruning method, in particular the choice between structured and unstructured pruning, as well as the benefit of data-agnostic heuristics that preserve their properties in the single domain generalization setting.
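The distinction the abstract draws between structured and unstructured pruning can be sketched in a few lines. The following is a minimal illustrative example, not the paper's actual method: function names, the magnitude-based criterion, and the toy weight matrix are all hypothetical stand-ins. Unstructured pruning zeroes individual low-magnitude weights (yielding sparse matrices), while structured pruning removes entire rows (e.g. filters) ranked here by L1 norm (yielding a genuinely smaller dense model, which is often easier to exploit on a microcontroller).

```python
def unstructured_prune(weights, sparsity):
    """Zero out the smallest-magnitude individual weights until roughly
    `sparsity` fraction of them are zero (magnitude criterion)."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(sparsity * len(flat))
    threshold = flat[k - 1] if k > 0 else -1.0
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in weights]

def structured_prune(weights, n_rows_to_remove):
    """Remove whole rows (e.g. convolution filters) with the smallest L1 norm,
    shrinking the layer's actual dimensions."""
    ranked = sorted(range(len(weights)), key=lambda i: sum(abs(w) for w in weights[i]))
    keep = set(ranked[n_rows_to_remove:])
    return [row for i, row in enumerate(weights) if i in keep]

# Toy 3x3 weight matrix; the middle row has a small L1 norm.
W = [[0.9, -0.05, 0.4],
     [0.01, 0.02, -0.03],
     [-0.7, 0.6, 0.1]]

print(unstructured_prune(W, 1 / 3))  # three smallest-magnitude weights zeroed
print(structured_prune(W, 1))        # the low-norm middle row removed entirely
```

Note the practical difference: the unstructured result keeps the original shape and needs a sparse storage format to save memory, whereas the structured result is a smaller dense matrix that standard inference runtimes can use directly.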


Notes

  1. https://gitlab.emse.fr/b.nguyen/randconvpruning.

  2. https://www.tensorflow.org/lite/microcontrollers.

  3. https://www.st.com/en/embedded-software/x-cube-ai.html.

  4. Setups are detailed in https://gitlab.emse.fr/b.nguyen/randconvpruning.


Acknowledgments

This work benefited from the French Jean Zay supercomputer thanks to the AI dynamic access program. This collaborative work is partially supported by the IPCEI on Microelectronics and Nano2022 actions and by the European project InSecTT (www.insectt.eu: ECSEL Joint Undertaking (876038). The JU receives support from the European Union’s H2020 program and Au, Sw, Sp, It, Fr, Po, Ir, Fi, Sl, Po, Nl, Tu. The document reflects only the author’s view and the Commission is not responsible for any use that may be made of the information it contains.) and by the French National Research Agency (ANR) in the framework of the Investissements d’Avenir program (ANR-10-AIRT-05, irtnanoelec).

Author information

Correspondence to Baptiste Nguyen.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Nguyen, B., Moëllic, PA., Blayac, S. (2022). Domain Generalization on Constrained Platforms: On the Compatibility with Pruning Techniques. In: González-Vidal, A., Mohamed Abdelgawad, A., Sabir, E., Ziegler, S., Ladid, L. (eds) Internet of Things. GIoTS 2022. Lecture Notes in Computer Science, vol 13533. Springer, Cham. https://doi.org/10.1007/978-3-031-20936-9_20

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-20936-9_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20935-2

  • Online ISBN: 978-3-031-20936-9

  • eBook Packages: Computer Science; Computer Science (R0)
