
Remove to Improve?


Part of the Lecture Notes in Computer Science book series (LNIP, volume 12663)


The workhorses of CNNs are their filters, located at different layers and tuned to different features. Filter responses are combined using weights obtained via network training, which optimizes performance over the entire training set, e.g., the highest average classification accuracy. In this paper, we extend the current understanding of the roles played by filters, their mutual interactions, and their relationship to classification accuracy. Our work is motivated by the observation that the classification accuracy for some classes increases, rather than decreases, when certain filters are pruned from a CNN. We experimentally address the following question: under what conditions does filter pruning increase classification accuracy? We show that accuracy improves for certain classes, namely those that are placed during learning into a region of the space spanned by filter usage that is populated with semantically related neighbors. The neighborhood structure of such classes is, however, sparse enough that the compression induced by pruning, which draws all classes closer together, also brings samples of the same class closer together and thus increases classification accuracy.
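The paper's own pruning criterion is not given in this excerpt. As a minimal illustrative sketch only, the kind of filter pruning discussed here can be approximated by the common L1-norm (magnitude) criterion of Li et al. (2016): rank a layer's filters by the sum of absolute weights and drop the smallest fraction. All function names below are hypothetical.

```python
import numpy as np

def rank_filters_by_l1(conv_weights):
    """Return filter indices sorted by ascending L1 norm.

    conv_weights: array of shape (num_filters, in_channels, kh, kw).
    Filters with the smallest L1 norm are the usual pruning candidates.
    """
    norms = np.abs(conv_weights).reshape(conv_weights.shape[0], -1).sum(axis=1)
    return np.argsort(norms)

def prune_filters(conv_weights, fraction=0.25):
    """Drop the `fraction` of filters with the smallest L1 norm.

    Returns the surviving filters (in their original order) and
    their indices, so downstream layers can be sliced to match.
    """
    order = rank_filters_by_l1(conv_weights)
    n_drop = int(fraction * conv_weights.shape[0])
    keep = np.sort(order[n_drop:])  # survivors, original ordering preserved
    return conv_weights[keep], keep

# Toy example: 8 random 3x3 filters over 3 input channels.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_filters(w, fraction=0.25)
print(pruned.shape)  # (6, 3, 3, 3)
```

In a real network, pruning a layer's filters also removes the corresponding input channels of the next layer; the per-class accuracy effects studied in the paper are measured after such structural pruning, not on isolated weight tensors.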

  • DOI: 10.1007/978-3-030-68796-0_11
  • Chapter length: 16 pages





This work was funded by the FCDRGP research grant from Nazarbayev University with reference number 240919FD3936.

Author information


Correspondence to Kamila Abdiyeva.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Abdiyeva, K., Lukac, M., Ahuja, N. (2021). Remove to Improve? In: et al. Pattern Recognition. ICPR International Workshops and Challenges. ICPR 2021. Lecture Notes in Computer Science, vol 12663. Springer, Cham.

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-68795-3

  • Online ISBN: 978-3-030-68796-0

  • eBook Packages: Computer Science (R0)