
CSHE: network pruning by using cluster similarity and matrix eigenvalues

  • Original Article
  • Published in: International Journal of Machine Learning and Cybernetics

Abstract

Although deep convolutional neural networks (CNNs) have achieved significant success in computer vision applications, their real-world deployment is often limited by computing resources and memory constraints. As a mainstream deep-model compression technology, neural network pruning offers a promising way to reduce a model's parameters and computation. In this paper, we propose a novel filter pruning method that combines information from convolution filters and feature maps for convolutional neural network compression, namely network pruning by using cluster similarity and large eigenvalues (CSHE). First, based on the convolution operation, we explore the similarity relationships among the feature maps generated by the corresponding filters. Concretely, a clustering algorithm groups filters by similarity, and this grouping guides the classification of the feature maps. Second, the proposed method uses the large eigenvalues of the feature maps to rank the importance of filters. Finally, we prune the low-ranking filters and retain the high-ranking ones. The proposed method eliminates redundancy among convolution filters by applying the large eigenvalues of feature maps on top of filter similarity. In this way, most of the representative information in the network is retained, and the pruned results are easily reproduced. Experiments show that the accuracy of the pruned sparse network obtained by CSHE on the CIFAR-10 and ImageNet ILSVRC-12 classification tasks is almost the same as that of the reference network, without any additional constraints.
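The abstract describes a two-step pipeline: cluster filters by similarity, then rank filters inside each cluster by the large eigenvalues (in practice, large singular values) of their feature maps, pruning the low-ranked ones. The sketch below is a minimal, illustrative reconstruction of that idea, not the authors' implementation: the function names, the tiny farthest-point k-means, the choice of the largest singular value as the importance score, and the keep-one-per-cluster rule are all assumptions made for the example.

```python
import numpy as np

def filter_clusters(weights, k, iters=20):
    """Group filters by similarity with a tiny k-means.

    weights: (n_filters, ...) array; each filter is flattened before clustering.
    Deterministic farthest-point initialisation avoids duplicate centres.
    """
    X = weights.reshape(len(weights), -1).astype(float)
    centers = [X[0]]
    while len(centers) < k:
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def eigen_scores(feature_maps):
    """Importance score per filter: the largest singular value of its feature map."""
    return np.array([np.linalg.svd(fm, compute_uv=False)[0] for fm in feature_maps])

def select_pruned(weights, feature_maps, k, keep_per_cluster=1):
    """Indices of filters to prune: within each similarity cluster,
    keep the `keep_per_cluster` filters with the largest eigen-scores."""
    labels = filter_clusters(weights, k)
    scores = eigen_scores(feature_maps)
    pruned = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        order = idx[np.argsort(scores[idx])[::-1]]  # highest score first
        pruned.extend(int(i) for i in order[keep_per_cluster:])
    return sorted(pruned)

# Toy example: six 2x2 filters forming two obvious clusters, and feature
# maps whose dominant singular value grows with the filter index.
weights = np.concatenate([np.ones((3, 2, 2)), -np.ones((3, 2, 2))])
fmaps = np.stack([i * np.eye(4) for i in range(1, 7)])
print(select_pruned(weights, fmaps, k=2))  # the weakest filters in each cluster
```

Because the most informative filter in every cluster survives, the redundancy removed is within-cluster rather than global, which is what lets the pruned network preserve the representative information the abstract refers to.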



Acknowledgements

The authors are very indebted to the anonymous referees for their critical comments and suggestions for the improvement of this paper. This work was supported by grants from the National Natural Science Foundation of China (Nos. 61673396, 61976245, 61772344).

Author information

Corresponding author

Correspondence to Mingwen Shao.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Shao, M., Dai, J., Wang, R. et al. CSHE: network pruning by using cluster similarity and matrix eigenvalues. Int. J. Mach. Learn. & Cyber. 13, 371–382 (2022). https://doi.org/10.1007/s13042-021-01411-8
