
CorrNet: Pearson correlation based pruning for efficient convolutional neural networks

Original Article · International Journal of Machine Learning and Cybernetics

Abstract

Convolutional neural networks (CNNs) are evolving rapidly, which usually brings a surge in computational cost and model size. In this article, we present a correlation-based filter pruning (CFP) approach for training more efficient CNN models. Unlike many existing filter pruning methods, our approach removes redundant filters according to the amount of information carried in their corresponding feature maps. We use the Pearson correlation to quantify the duplication of information across feature maps and build a feature selection scheme on top of it to derive the pruning criterion. Pruning and fine-tuning are alternated over several cycles, producing slimmer, more compact networks with accuracy similar to the original unpruned model. We empirically evaluate our technique with several state-of-the-art CNN models on standard benchmark datasets. Specifically, for ResNet-50 on ImageNet, our approach removes 44.6% of the filter weights and saves 51.6% of the floating-point operations (FLOPs) with a 0.5% accuracy gain, achieving state-of-the-art performance.
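The abstract only sketches the criterion, so the following minimal PyTorch sketch illustrates how a Pearson-correlation redundancy score over one layer's filters might look. The function name `filter_redundancy_scores`, the batch-averaging step, and the 30% pruning ratio in the usage comment are illustrative assumptions, not the authors' released implementation.

```python
import torch

def filter_redundancy_scores(feature_maps: torch.Tensor) -> torch.Tensor:
    """Score each filter by how strongly its feature map correlates with the rest.

    feature_maps: activations of one convolutional layer, shape (N, C, H, W).
    Returns a (C,) tensor; a higher score marks a more redundant filter,
    i.e. a stronger candidate for pruning.
    """
    n, c, h, w = feature_maps.shape
    # Average over the batch so every filter is summarized by one vector.
    flat = feature_maps.reshape(n, c, h * w).mean(dim=0)   # (C, H*W)
    # Pearson correlation = dot product of mean-centred, unit-norm vectors.
    flat = flat - flat.mean(dim=1, keepdim=True)
    flat = flat / (flat.norm(dim=1, keepdim=True) + 1e-8)
    corr = flat @ flat.t()                                 # (C, C) correlation matrix
    corr.fill_diagonal_(0.0)                               # ignore self-correlation
    # Redundancy of a filter: mean absolute correlation with all other filters.
    return corr.abs().mean(dim=1)

# One pruning round (pruning and fine-tuning are then cycled):
# scores = filter_redundancy_scores(activations_from_calibration_batch)
# to_prune = scores.topk(int(0.3 * scores.numel())).indices
```

Scoring filters by mean absolute correlation, rather than by weight magnitude, targets duplicated information directly: a filter whose feature map is nearly a linear function of the others can be removed with little loss, which the subsequent fine-tuning step then recovers.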



Acknowledgements

This work was supported by the National Natural Science Foundation of China under grant number 62133013 and sponsored by the CAAI-Huawei MindSpore Open Fund. The Chinese Academy of Sciences (CAS) and The World Academy of Sciences (TWAS) are gratefully acknowledged for the funding that made this study possible.

Author information

Corresponding author: Aakash Kumar.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kumar, A., Yin, B., Shaikh, A.M. et al. CorrNet: Pearson correlation based pruning for efficient convolutional neural networks. Int. J. Mach. Learn. & Cyber. 13, 3773–3783 (2022). https://doi.org/10.1007/s13042-022-01624-5

