
Efficient Residue Number System Based Winograd Convolution

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12364)

Abstract

Prior research has shown that the Winograd algorithm can reduce the computational complexity of convolutional neural networks (CNN) with weights and activations represented in floating point. However, it is difficult to apply the scheme to the inference of low-precision quantized (e.g. INT8) networks. Our work extends the Winograd algorithm to the Residue Number System (RNS). The minimal-complexity convolution is computed exactly over large transformation tiles (e.g. \(10 \times 10\) to \(16 \times 16\)) of filters and activation patches using the Winograd transformation and low-cost (e.g. 8-bit) arithmetic, without degrading the prediction accuracy of the networks during inference. The arithmetic complexity reduction is up to \(7.03\times \), while the performance improvement is up to \(2.30\times \) and \(4.69\times \) for \(3\times 3\) and \(5 \times 5\) filters, respectively.
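
To make the idea concrete, the sketch below is a minimal, assumed illustration rather than the paper's construction (which uses much larger tiles and an optimized implementation): the small Winograd transform F(2,3) is evaluated independently in each channel of a hypothetical three-modulus RNS using only modular arithmetic that fits in narrow integers, and the exact integer result is recovered with the Chinese Remainder Theorem. The fractional entries of the Winograd filter transform are handled as multiplications by the modular inverse of 2.

# Illustrative sketch only (assumed moduli, small F(2,3) tile); Python 3.8+.
MODULI = (251, 241, 239)  # assumed pairwise-coprime moduli; product ~1.4e7

def winograd_f23_mod(d, g, m):
    """F(2,3): two outputs of a 3-tap correlation with 4 multiplies, all mod m."""
    inv2 = pow(2, -1, m)  # the 1/2 factors of the filter transform become 2^{-1} mod m
    # input transform B^T d
    td = [(d[0] - d[2]) % m, (d[1] + d[2]) % m, (d[2] - d[1]) % m, (d[1] - d[3]) % m]
    # filter transform G g
    tg = [g[0] % m,
          (g[0] + g[1] + g[2]) * inv2 % m,
          (g[0] - g[1] + g[2]) * inv2 % m,
          g[2] % m]
    # element-wise product in the Winograd domain
    p = [x * y % m for x, y in zip(td, tg)]
    # output transform A^T p
    return [(p[0] + p[1] + p[2]) % m, (p[1] - p[2] - p[3]) % m]

def crt(residues, moduli):
    """Reconstruct the value modulo prod(moduli) from its residues."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

d = [10, -3, 7, 2]   # activation patch (e.g. INT8 values)
g = [1, -2, 3]       # filter taps
channels = [winograd_f23_mod(d, g, m) for m in MODULI]
M = MODULI[0] * MODULI[1] * MODULI[2]
y = [crt([c[i] for c in channels], MODULI) for i in range(2)]
y = [v - M if v > M // 2 else v for v in y]  # map back to signed integers
print(y)  # [37, -11], identical to the direct 3-tap correlation

Because every residue channel only ever holds values smaller than its modulus, all intermediate Winograd arithmetic stays within narrow (here 8-bit-scale) operands, while the CRT reconstruction recovers the exact wide-precision result.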

Supplementary material

Supplementary material 1: 504475_1_En_4_MOESM1_ESM.pdf (PDF, 219 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Arm ML Research Lab, Boston, USA
