DECACNN: differential evolution-based approach to compress and accelerate the convolution neural network model

  • Original Article
  • Published in Neural Computing and Applications

Abstract

In this research work, a differential evolution-based method is used to compress deep neural network architectures. The compression is achieved by selecting the most dominant filters/nodes during the training of the model, where the usefulness of a filter is established by the test accuracy of the model. The experimental results demonstrate that the proposed model compares favorably with other state-of-the-art model compression techniques. The compression achieved with VGG16 on the MNIST, CIFAR-10 and CIFAR-100 datasets was 98.32, 98.5 and 93.54%, respectively. The corresponding compression achieved on ResNet50 was 85.24, 85.38 and 79.37%, while SqueezeNet, which is already a compressed model, could be further compressed by 72.94, 73.77 and 44.59%, respectively. MobileNet, which is already a compact model developed for mobile applications, could also be compressed by 93.04, 93.74 and 76.37% on the MNIST, CIFAR-10 and CIFAR-100 datasets, respectively. The loss in accuracy of the compressed models is less than 2%. Further, the compressed models show an acceleration in inference time of 80.79% on VGG16, 74.14% on ResNet50, 42.96% on MobileNet and 11.79% on SqueezeNet.
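
As a rough illustration of the core idea, the sketch below runs a classic DE/rand/1/bin search over a binary keep/prune mask for the filters of a single convolutional layer. This is a minimal sketch, not the authors' implementation (see the GitHub repository linked in the Notes for that): the function names, parameter values, and the stand-in fitness function `mask_accuracy` are all illustrative assumptions. In the paper, the fitness of a candidate filter subset is the test accuracy of the model with the pruned filters removed.

```python
import numpy as np

# Hypothetical stand-in fitness. The paper scores a candidate filter subset
# by the test accuracy of the masked model; evaluating a real network is out
# of scope here, so this surrogate simply rewards masks that keep roughly
# 20% of the filters, mimicking a compression objective.
def mask_accuracy(mask: np.ndarray) -> float:
    keep_ratio = mask.mean()
    return 1.0 - abs(keep_ratio - 0.2)


def de_filter_selection(n_filters=64, pop_size=20, generations=50,
                        F=0.5, CR=0.9, seed=0):
    """DE/rand/1/bin over vectors in [0, 1]^n_filters; thresholding a
    vector at 0.5 turns it into a binary keep(1)/prune(0) filter mask."""
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, n_filters))
    fit = np.array([mask_accuracy(v > 0.5) for v in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct members other than target i.
            a, b, c = rng.choice(
                [j for j in range(pop_size) if j != i], size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), 0.0, 1.0)
            # Binomial crossover, forcing at least one gene from the mutant.
            cross = rng.random(n_filters) < CR
            cross[rng.integers(n_filters)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: the trial replaces the target if it is fitter.
            f_trial = mask_accuracy(trial > 0.5)
            if f_trial >= fit[i]:
                pop[i], fit[i] = trial, f_trial

    best = np.argmax(fit)
    return pop[best] > 0.5, fit[best]


if __name__ == "__main__":
    mask, score = de_filter_selection()
    print(f"kept {mask.sum()}/{mask.size} filters, fitness = {score:.3f}")
```

Swapping `mask_accuracy` for a routine that masks the corresponding filters of a trained network (e.g., VGG16 or ResNet50) and measures its test accuracy would reproduce the fitness criterion described in the abstract.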

Data Availability

The datasets generated and/or analyzed during the current study can be requested from the corresponding author.

Notes

  1. https://github.com/mohit-aren/DE_Compress.

Author information

Corresponding author

Correspondence to Mohit Agarwal.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Agarwal, M., Gupta, S.K. & Biswas, K.K. DECACNN: differential evolution-based approach to compress and accelerate the convolution neural network model. Neural Comput & Applic 36, 2665–2681 (2024). https://doi.org/10.1007/s00521-023-09166-9
