Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12346)

Abstract

In the feature maps of CNNs, there commonly exists considerable spatial redundancy that leads to much repetitive processing. To reduce this superfluous computation, we propose to compute features only at sparsely sampled locations, which are probabilistically chosen according to activation responses, and then densely reconstruct the feature map with an efficient interpolation procedure. With this sampling-interpolation scheme, our network avoids expending computation on spatial locations that can be effectively interpolated, while remaining robust to activation prediction errors through broadly distributed sampling. A technical challenge of this sampling-based approach is that the binary decision variables representing discrete sampling locations are non-differentiable, making them incompatible with backpropagation. To circumvent this issue, we use a reparameterization trick based on the Gumbel-Softmax distribution, with which backpropagation can drive these variables towards binary values. The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
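
To make the scheme above concrete, below is a minimal PyTorch sketch (not the authors' implementation) of the two ingredients described in the abstract: a per-location binary sampling mask drawn with a Gumbel-Softmax (binary Gumbel-Sigmoid) reparameterization so the mask stays trainable by backpropagation, and a cheap mask-normalized interpolation that fills unsampled locations from nearby sampled ones. All names (gumbel_sigmoid, SparseSampleInterpolate, mask_predictor, costly_branch) and the pooling-based interpolation are illustrative assumptions rather than the paper's actual modules; for clarity the costly branch is computed densely and then masked, whereas the paper's point is to execute it only at the sampled points.

```python
# Illustrative sketch of stochastic feature sampling + interpolation (assumed names).
import torch
import torch.nn as nn
import torch.nn.functional as F

def gumbel_sigmoid(logits, tau=1.0, hard=True):
    """Approximately binary samples from per-location logits (binary Gumbel-Softmax)."""
    # Gumbel noise for the "keep" and "skip" options; their difference enters a sigmoid.
    g1 = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
    g2 = -torch.log(-torch.log(torch.rand_like(logits) + 1e-10) + 1e-10)
    soft = torch.sigmoid((logits + g1 - g2) / tau)
    if hard:
        # Straight-through estimator: forward pass is binary, gradients flow through `soft`.
        return (soft > 0.5).float() + soft - soft.detach()
    return soft

class SparseSampleInterpolate(nn.Module):
    """Compute a costly branch only at sampled locations and interpolate elsewhere."""
    def __init__(self, channels):
        super().__init__()
        self.mask_predictor = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.costly_branch = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, tau=1.0):
        logits = self.mask_predictor(x)        # sampling logits driven by activations
        mask = gumbel_sigmoid(logits, tau)     # ~binary (N, 1, H, W) sampling mask
        # Sketch only: computed densely then masked; a real implementation would
        # evaluate the branch solely at the sampled points to save computation.
        sparse = self.costly_branch(x) * mask
        # Cheap dense reconstruction: average-pool features and mask, then normalise,
        # so unsampled locations borrow values from nearby sampled ones.
        k = 5
        num = F.avg_pool2d(sparse, k, stride=1, padding=k // 2)
        den = F.avg_pool2d(mask, k, stride=1, padding=k // 2).clamp(min=1e-6)
        interpolated = num / den
        # Exact values at sampled locations, interpolated values elsewhere.
        return mask * sparse + (1 - mask) * interpolated

x = torch.randn(2, 16, 32, 32)
out = SparseSampleInterpolate(16)(x)
print(out.shape)  # torch.Size([2, 16, 32, 32])
```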

Keywords

Sparse convolution · Sparse sampling · Feature interpolation

Notes

Acknowledgment

We would like to thank Jifeng Dai for his early contribution to this work during his time at Microsoft Research Asia.

Supplementary material

Supplementary material 1 (PDF, 260 KB): 500725_1_En_31_MOESM1_ESM.pdf


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Tsinghua University, Beijing, China
  2. Microsoft Research Asia, Beijing, China
  3. University of Science and Technology of China, Hefei, China
  4. Department of Automation, Tsinghua University, Beijing, China
