Abstract
Although many dedicated accelerators are gaining popularity for their performance and energy efficiency in the deep neural network (DNN) domain, high-level programming support for these accelerators remains thin. In contrast to existing research that targets whole DNNs, we examine this problem at a finer granularity: operators. For performance reasons, operator programmers often have to fall back on hand-written assembly, which is error-prone and burdened with programming chores. To alleviate this problem, we propose TOpLib, a compiler-assisted template library. By providing a unified user-view abstraction, TOpLib lets programmers express computational kernels with high-level tensor primitives, which are automatically lowered into low-level intrinsic primitives via expression templates. Moreover, since memory management is performance-critical and the optimization strategies of expression templates are limited to enumeration-based rewriting rules, we implement TOpLib with a compiler-assisted approach: memory-reuse challenges are delegated to the compiler, which allows TOpLib to make full use of on-chip buffers and yields better performance. Experiments over 55 typical DNN operators demonstrate that TOpLib generates scalable code whose performance is faster than or on par with hand-written assembly versions.
Notes
- 1.
Due to space limits, the detailed data scales are listed in the anonymous GitHub repository: https://github.com/anonymous-0x00/npc20-benchmarks.
Acknowledgement
This work is supported by the National Key R&D Program of China (under Grant No. 2017YFB1003103) and the Science Fund for Creative Research Groups of the National Natural Science Foundation of China (under Grant No. 61521092).
Copyright information
© 2021 IFIP International Federation for Information Processing
Cite this paper
Li, J. et al. (2021). Compiler-Assisted Operator Template Library for DNN Accelerators. In: He, X., Shao, E., Tan, G. (eds) Network and Parallel Computing. NPC 2020. Lecture Notes in Computer Science, vol 12639. Springer, Cham. https://doi.org/10.1007/978-3-030-79478-1_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-79477-4
Online ISBN: 978-3-030-79478-1