Abstract
A rising research challenge is running costly machine learning (ML) networks locally on resource-constrained edge devices. ML networks with large convolutional layers can easily exceed available memory, increasing latency due to excessive OS swapping. Previous memory-reduction techniques such as pruning and quantization reduce model accuracy and often require retraining. Alternatively, distributed methods partition convolutions into equivalent smaller sub-computations, but their implementations introduce communication costs and require a network of devices. Distributed partitioning approaches can, however, also be used to reduce the memory footprint on a single device by subdividing the network into smaller operations. In this paper, we extend prior work on distributed partitioning into a memory-aware execution on a single device. Our approach extends prior fusing strategies to allow multiple groups of convolutional layers that are fused and tiled independently, enabling a trade-off between overhead and data reuse specifically to reduce memory footprint. We propose a memory-usage predictor coupled with a search algorithm that provides optimized fusing and tiling configurations for an arbitrary set of convolutional layers. When applied to the YOLOv2 object detection network, results show that our approach can run in less than half the memory and achieve a speedup of up to 2.78x under severe memory constraints. Additionally, our algorithm returns a configuration whose latency is within 6% of the best latency measured in a manual search.
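To make the fuse-and-tile idea concrete, the following is a minimal NumPy sketch, not the paper's MAFAT implementation: two stacked 3x3 convolutions are evaluated one output tile at a time, so only a small tile of the intermediate feature map is ever materialized. The per-layer halo rows and columns that each tile recomputes are the overhead traded against data reuse; all function names, shapes, and the tile size here are illustrative.

    import numpy as np

    def conv3x3(x, w):
        # Valid (unpadded) 3x3 convolution over a single-channel 2D input.
        h, wd = x.shape
        out = np.zeros((h - 2, wd - 2))
        for i in range(h - 2):
            for j in range(wd - 2):
                out[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)
        return out

    def fused_tiled(x, w1, w2, tile=8):
        # Run conv(conv(x, w1), w2) one output tile at a time. Each th-by-tw
        # output tile needs a (th+2)-by-(tw+2) tile of the intermediate map,
        # which needs a (th+4)-by-(tw+4) input tile: the halo is the overhead
        # introduced by fusing the two layers.
        h, wd = x.shape
        out_h, out_w = h - 4, wd - 4          # two valid 3x3 convs shrink by 4
        out = np.zeros((out_h, out_w))
        for ti in range(0, out_h, tile):
            for tj in range(0, out_w, tile):
                th = min(tile, out_h - ti)
                tw = min(tile, out_w - tj)
                x_tile = x[ti:ti + th + 4, tj:tj + tw + 4]  # input tile + halo
                mid = conv3x3(x_tile, w1)     # small intermediate tile only
                out[ti:ti + th, tj:tj + tw] = conv3x3(mid, w2)
        return out

    rng = np.random.default_rng(0)
    x = rng.standard_normal((64, 64))
    w1 = rng.standard_normal((3, 3))
    w2 = rng.standard_normal((3, 3))
    ref = conv3x3(conv3x3(x, w1), w2)         # unfused: full 62x62 intermediate
    assert np.allclose(fused_tiled(x, w1, w2), ref)

In the unfused schedule, the entire 62x62 intermediate map is live between the two layers; in the fused schedule, peak intermediate storage shrinks to roughly (tile+2) squared values per tile, at the cost of recomputing the 2-pixel halo shared by neighboring tiles.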