Retraining a Pruned Network: A Unified Theory of Time Complexity

  • Original Research
  • SN Computer Science

Abstract

Fine-tuning of neural network parameters is an essential step in model compression via pruning; it lets the network relearn from the training data. The time needed for this relearning is crucial when identifying a hardware-friendly architecture. This paper analyzes the fine-tuning, or retraining, step after a network is pruned layer-wise and derives lower and upper bounds on the number of iterations the pruned network needs for retraining to the required convergence error. The bounds are expressed in terms of the desired convergence error, the optimizer parameters, the amount of pruning, and the number of iterations used for the initial training of the network. Experiments on the LeNet-300-100 and LeNet-5 networks validate the bounds for both random connection pruning and clustered pruning.
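
The abstract describes a concrete pipeline: prune each layer of a trained network, then retrain (fine-tune) it and count how many iterations the pruned network needs to reach a required convergence error. The sketch below is a minimal illustration of that pipeline for random connection pruning, not the authors' code: the LeNet-300-100 architecture (a 784-300-100-10 fully connected network) is taken from the literature, while the pruning fraction, SGD settings, target error, and synthetic stand-in data are hypothetical choices made only to keep the example self-contained. Clustered pruning, also studied in the paper, is not shown.

import torch
import torch.nn as nn

torch.manual_seed(0)

# LeNet-300-100: a 784-300-100-10 fully connected network.
model = nn.Sequential(
    nn.Linear(784, 300), nn.ReLU(),
    nn.Linear(300, 100), nn.ReLU(),
    nn.Linear(100, 10),
)

# Layer-wise random connection pruning: zero a fraction p of the weights in
# every Linear layer and keep the masks so those connections stay removed.
p = 0.5  # assumed amount of pruning; the paper varies this quantity
masks = {}
for layer in model:
    if isinstance(layer, nn.Linear):
        mask = (torch.rand_like(layer.weight) > p).float()
        layer.weight.data.mul_(mask)
        masks[layer] = mask

# Retrain the pruned network until the loss falls below a target convergence
# error, counting iterations; this count is what the paper's bounds concern.
x = torch.randn(256, 784)          # synthetic stand-in for the training inputs
y = torch.randint(0, 10, (256,))   # synthetic stand-in labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # assumed optimizer settings
criterion = nn.CrossEntropyLoss()

target_error = 0.05  # assumed convergence error
iterations = 0
loss = criterion(model(x), y)
while loss.item() > target_error and iterations < 10_000:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    for layer, mask in masks.items():
        layer.weight.data.mul_(mask)  # pruned weights stay zero during retraining
    iterations += 1

print(f"retraining iterations to reach error {target_error}: {iterations}")

The iteration count reported at the end is the quantity the paper bounds from above and below in terms of the convergence error, the optimizer parameters, the amount of pruning, and the length of the initial training.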

Author information

Corresponding author

Correspondence to Soumya Sara John.

Ethics declarations

Funding

Not applicable.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

Not applicable.

Availability of Data and Material

Not applicable.

Code Availability

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Computational Biology and Biomedical Informatics” guest edited by Dhruba Kr Bhattacharyya, Sushmita Mitra and Jugal Kr Kalita.

About this article

Cite this article

John, S.S., Mishra, D. & Johnson, S.R. Retraining a Pruned Network: A Unified Theory of Time Complexity. SN COMPUT. SCI. 1, 203 (2020). https://doi.org/10.1007/s42979-020-00208-w
