RDO-Q: Extremely Fine-Grained Channel-Wise Quantization via Rate-Distortion Optimization

  • Conference paper in Computer Vision – ECCV 2022 (ECCV 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13672)

Abstract

Allocating different bit widths to different channels and quantizing them independently brings higher quantization precision and accuracy. Most prior works use an equal bit width to quantize all layers or channels, which is sub-optimal. On the other hand, exploring the hyperparameter space of channel bit widths is very challenging, as the search space grows exponentially with the number of channels, which can reach tens of thousands in a deep neural network. In this paper, we address the problem of efficiently exploring the hyperparameter space of channel bit widths. We formulate the quantization of deep neural networks as a rate-distortion optimization problem and present an ultra-fast algorithm to search the bit allocation of channels. Our approach has only linear time complexity and can find the optimal bit allocation within a few minutes on a CPU. In addition, we provide an effective way to improve performance on target hardware platforms. We restrict the bit rate (size) of each layer so that as many weights and activations as possible can be stored on-chip, and we incorporate these hardware-aware constraints into our objective function. The hardware-aware constraints add no overhead to the optimization and have a very positive impact on hardware performance. Experimental results show that our approach achieves state-of-the-art results on four deep neural networks, ResNet-18, ResNet-34, ResNet-50, and MobileNet-v2, on ImageNet. Hardware simulation results demonstrate that our approach brings up to 3.5× and 3.0× speedups on two deep-learning accelerators, TPU and Eyeriss, respectively.

J. Lin and V. Chandrasekhar did this work while they were with the Institute for Infocomm Research, Singapore.
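To make the rate-distortion formulation above concrete, the sketch below shows how a Lagrangian bit-allocation search over channels can run in linear time per sweep. It is a minimal illustration, not the authors' implementation: the uniform symmetric quantizer, the squared weight error used as the distortion proxy, the rate model of bits times weight count, the candidate bit widths 2 to 8, and the bisection on the Lagrange multiplier are all assumptions made here for demonstration, and the helper names (quantize_uniform, channel_rd_tables, allocate_bits) are hypothetical.

import numpy as np

def quantize_uniform(w, bits):
    # Symmetric uniform quantization of a 1-D weight vector (assumes bits >= 2).
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    if scale == 0:
        return w.copy()
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

def channel_rd_tables(channels, candidate_bits=(2, 3, 4, 5, 6, 7, 8)):
    # Precompute per-channel rate (total bits) and distortion (squared error) tables.
    rates, dists = [], []
    for w in channels:
        rates.append(np.array([b * w.size for b in candidate_bits], dtype=float))
        dists.append(np.array([np.sum((w - quantize_uniform(w, b)) ** 2)
                               for b in candidate_bits]))
    return rates, dists

def allocate_bits(rates, dists, rate_budget, candidate_bits=(2, 3, 4, 5, 6, 7, 8), iters=60):
    # For a fixed multiplier lam, each channel independently picks the bit width
    # minimizing D + lam * R, so one sweep is linear in the number of channels.
    # Bisection on lam then drives the total rate toward the budget.
    lo, hi = 0.0, 1e12
    feasible = None
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        choice = [int(np.argmin(d + lam * r)) for r, d in zip(rates, dists)]
        total_rate = sum(r[c] for r, c in zip(rates, choice))
        if total_rate > rate_budget:
            lo = lam        # over budget: penalize rate more heavily
        else:
            hi = lam        # within budget: record this allocation, try to spend more bits
            feasible = choice
    return None if feasible is None else [candidate_bits[c] for c in feasible]

The distortion proxy here is deliberately simple; the search itself is agnostic to how the per-channel distortion table is built, so a task-aware distortion estimate or additional per-layer rate constraints (to keep a layer's weights and activations on-chip, as the paper proposes) would change only the table construction, not the linear-time sweep.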



Acknowledgments

This research is supported by the Agency for Science, Technology and Research (A*STAR) under its funds (Project Numbers A1892b0026, A19E3b0099, and C211118009). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of A*STAR.

Author information


Corresponding author

Correspondence to Jie Lin.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 209 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Z., Lin, J., Geng, X., Aly, M.M.S., Chandrasekhar, V. (2022). RDO-Q: Extremely Fine-Grained Channel-Wise Quantization via Rate-Distortion Optimization. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13672. Springer, Cham. https://doi.org/10.1007/978-3-031-19775-8_10


  • DOI: https://doi.org/10.1007/978-3-031-19775-8_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19774-1

  • Online ISBN: 978-3-031-19775-8

  • eBook Packages: Computer Science, Computer Science (R0)
