Abstract
Structured pruning has recently received increased attention as a way to compress convolutional neural networks (CNNs). Existing methods remain suboptimal, however, because pruning still incurs a noticeable performance loss. This conflicts with learning theory's parsimony principle, which prefers the simpler of two models that explain the same facts and thus maintain a consistent level of performance. In this paper, we propose rollback learning for pruning (RLP), an algorithm that prunes networks efficiently while maintaining performance. RLP provides a generic yet tailored pruning framework that is easily applied to existing CNNs. Unlike prior work, we prune kernels conditionally: only those that have reached their performance limit (high filter sparsity) are ultimately removed. This guarantees optimized pruning by backtracking and updating the dense filters to their full potential. Our implementation incorporates rollback pruning into generative adversarial learning, defining a new objective function that computes a soft mask to scale the outputs of network structures. The soft mask is backtracked and duplicated before and after the batch normalization layer, yielding an optimized fit to the structured pruning. Extensive experiments demonstrate that the proposed method significantly outperforms state-of-the-art methods, achieving a \(0.25\times\) FLOPs reduction with a \(0.04\%\) accuracy increase over the full-precision ResNet-18 model on CIFAR-10, and a \(0.18\times\) FLOPs reduction with a \(0.54\%\) accuracy increase over the full-precision ResNet-50 model on ImageNet.
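To make the mechanism concrete, the following is a minimal PyTorch sketch of the two ideas the abstract describes: a per-channel soft mask duplicated before and after batch normalization, and a conditional pruning step that removes only filters whose mask has saturated near zero while backtracking the remaining dense filters. All names here (`SoftMaskedBlock`, `rollback_prune`, `prune_thresh`, `alpha`) are hypothetical illustrations rather than the authors' implementation; following the note below, `alpha` stands in for the backtrack rate.

```python
import torch
import torch.nn as nn

class SoftMaskedBlock(nn.Module):
    """Conv -> BN with one learnable soft mask per output channel,
    applied (duplicated) both before and after batch normalization,
    as the abstract describes. Hypothetical sketch, not the authors' code."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        # Soft mask, one scale per output channel; in RLP it would be
        # learned under the adversarial objective the paper defines.
        self.mask = nn.Parameter(torch.ones(out_ch))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        m = self.mask.view(1, -1, 1, 1)       # broadcast over N, H, W
        return m * self.bn(m * self.conv(x))  # mask duplicated around BN

def rollback_prune(block: SoftMaskedBlock,
                   dense_backup: torch.Tensor,
                   prune_thresh: float = 0.05,
                   alpha: float = 0.9) -> None:
    """Conditional pruning with rollback: only filters whose mask has
    saturated near zero (high filter sparsity) are removed; the rest
    are backtracked toward their stored dense weights. `alpha` is the
    assumed backtrack rate (see the note below)."""
    with torch.no_grad():
        saturated = block.mask.abs() < prune_thresh  # filters at their limit
        block.mask[saturated] = 0.0                  # ultimately remove these
        keep = ~saturated                            # backtrack dense filters
        block.conv.weight[keep] = (alpha * dense_backup[keep]
                                   + (1 - alpha) * block.conv.weight[keep])
```

In an actual training loop the mask would be optimized adversarially as the paper describes, and the zero-mask channels would then be physically removed from the convolution to realize the reported FLOPs reduction.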
Notes
1. \(\alpha \) and the backtrack rate are functionally equivalent.
Acknowledgements
Baochang Zhang is also with the Shenzhen Academy of Aerospace Technology, Shenzhen, China, and is the corresponding author. His work is supported in part by the National Natural Science Foundation of China under Grant 61672079 and by the Shenzhen Science and Technology Program (No. KQTD2016112515134654). This study was also supported by Grant No. 2019JZZY011101 from the Key Research and Development Program of Shandong Province to Dianmin Sun.
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
Cite this paper
Xia, X., Ding, W., Zhuo, L., Zhang, B. (2020). Tailored Pruning via Rollback Learning. In: Zhang, H., Zhang, Z., Wu, Z., Hao, T. (eds) Neural Computing for Advanced Applications. NCAA 2020. Communications in Computer and Information Science, vol 1265. Springer, Singapore. https://doi.org/10.1007/978-981-15-7670-6_11
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-7669-0
Online ISBN: 978-981-15-7670-6