MAFAT: Memory-Aware Fusing and Tiling of Neural Networks for Accelerated Edge Inference

  • Conference paper
  • First Online:
Designing Modern Embedded Systems: Software, Hardware, and Applications (IESS 2022)

Part of the book series: IFIP Advances in Information and Communication Technology (IFIPAICT, volume 669)

Abstract

A rising research challenge is running costly machine learning (ML) networks locally on resource-constrained edge devices. ML networks with large convolutional layers can easily exceed available memory, increasing latency due to excessive OS swapping. Previous memory-reduction techniques such as pruning and quantization reduce model accuracy and often require retraining. Alternatively, distributed methods partition convolutions into equivalent smaller sub-computations, but their implementations introduce communication costs and require a network of devices. Distributed partitioning approaches can, however, also be applied on a single device to reduce the memory footprint by subdividing the network into smaller operations. In this paper, we extend prior work on distributed partitioning into a memory-aware execution on a single device. Our approach extends prior fusing strategies to allow multiple groups of convolutional layers that are fused and tiled independently, enabling a trade-off between overhead and data reuse specifically to reduce memory footprint. We propose a memory usage predictor coupled with a search algorithm that provides optimized fusing and tiling configurations for an arbitrary set of convolutional layers. When applied to the YOLOv2 object detection network, results show that our approach can run in less than half the memory, with a speedup of up to 2.78x under severe memory constraints. Additionally, our algorithm returns a configuration whose latency is within 6% of the best latency measured in a manual search.
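
To make the mechanism concrete, here is a minimal Python sketch of fused, tiled convolution plus a crude per-tile memory predictor. This is an assumption-laden illustration, not MAFAT's implementation: the helper names (conv3x3, fused_tiled_forward, predict_peak_bytes), the naive 'same'-padded 3x3 convolutions, and the simple footprint model are all invented here for exposition. What it demonstrates is the core idea: running a whole fused group of layers over one spatial tile at a time keeps only tile-sized intermediates live, at the cost of a halo of recomputed border pixels that grows with the depth of the group.

```python
# Hypothetical sketch of fused, tiled execution of a stack of 3x3 convolutions.
# Not MAFAT's API; names and the footprint model are illustrative assumptions.
import numpy as np

def conv3x3(x, w):
    # Naive 'same' 3x3 convolution: x is (H, W, C_in), w is (3, 3, C_in, C_out).
    h, wd, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, wd, w.shape[-1]))
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.tensordot(xp[i:i + 3, j:j + 3], w, axes=3)
    return out

def fused_tiled_forward(x, weights, grid=(2, 2)):
    # Execute the whole fused group tile by tile, so only per-tile
    # intermediates are ever alive. Assumes grid divides H and W evenly.
    h, w, _ = x.shape
    halo = len(weights)                      # +1 pixel of receptive field per 3x3 layer
    th, tw = h // grid[0], w // grid[1]
    out = None
    for ti in range(grid[0]):
        for tj in range(grid[1]):
            r0, c0 = ti * th, tj * tw
            # Slice the tile plus its halo, clamped to the input bounds.
            rs, cs = max(r0 - halo, 0), max(c0 - halo, 0)
            re, ce = min(r0 + th + halo, h), min(c0 + tw + halo, w)
            t = x[rs:re, cs:ce]
            for wt in weights:               # fused: no full-size intermediate maps
                t = conv3x3(t, wt)
            if out is None:
                out = np.zeros((h, w, t.shape[-1]))
            # Crop away the halo (contaminated by per-tile zero padding)
            # and write back only the tile core.
            out[r0:r0 + th, c0:c0 + tw] = t[r0 - rs:r0 - rs + th, c0 - cs:c0 - cs + tw]
    return out

def predict_peak_bytes(h, w, channels, grid, dtype_bytes=4):
    # Crude footprint model: the largest input+output pair of per-tile
    # feature maps that must coexist (ignores halos and weight storage).
    th, tw = h // grid[0], w // grid[1]
    maps = [th * tw * c * dtype_bytes for c in channels]
    return max(a + b for a, b in zip(maps, maps[1:]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((32, 32, 3))
    ws = [rng.standard_normal((3, 3, 3, 8)), rng.standard_normal((3, 3, 8, 8))]
    y = fused_tiled_forward(x, ws, grid=(2, 2))
    print(y.shape, predict_peak_bytes(32, 32, [3, 8, 8], (2, 2)))
```

A coarser tile grid reuses more data and shrinks halo overhead but needs more memory per tile; a finer grid does the opposite. A footprint estimate like the toy predict_peak_bytes above, evaluated per candidate (grouping, grid) configuration, is the kind of signal a search algorithm can use to pick fusing and tiling settings under a memory budget.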

Author information

Corresponding author

Correspondence to Andreas Gerstlauer.

Copyright information

© 2023 IFIP International Federation for Information Processing

About this paper

Cite this paper

Farley, J., Gerstlauer, A. (2023). MAFAT: Memory-Aware Fusing and Tiling of Neural Networks for Accelerated Edge Inference. In: Henkler, S., Kreutz, M., Wehrmeister, M.A., Götz, M., Rettberg, A. (eds) Designing Modern Embedded Systems: Software, Hardware, and Applications. IESS 2022. IFIP Advances in Information and Communication Technology, vol 669. Springer, Cham. https://doi.org/10.1007/978-3-031-34214-1_7

  • DOI: https://doi.org/10.1007/978-3-031-34214-1_7

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-34213-4

  • Online ISBN: 978-3-031-34214-1

  • eBook Packages: Computer Science, Computer Science (R0)
