Localization and reduction of redundancy in CNN using L1-sparsity induction

  • Original Research
  • Published in: Journal of Ambient Intelligence and Humanized Computing

Abstract

Convolutional neural networks (CNNs) have achieved tremendous performance in many machine learning areas. However, their large number of parameters leads to a redundancy problem that negatively impacts performance: many kernels are redundant and can be removed from the network without much loss of accuracy. In this paper, we propose a new optimization model for localizing and removing redundancy in CNNs. Unlike numerous existing methods, which only reduce redundancy, our proposal also localizes the distribution of redundancy across the network. The suggested model consists of two stages. In the first, a dataset is used to train a specific CNN, producing a learned CNN with optimal parameters. These parameters are then combined with a decision \(L_{1}\)-sparsity optimization model for detecting and removing unwanted kernels. Finally, an evolutionary genetic algorithm is adapted to solve the proposed model, yielding an optimal CNN together with prior information about the redundancy distribution. The performance of our approach is demonstrated by several experiments.
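
To make the two-stage idea concrete, the following minimal sketch (ours, not the authors' code) evolves a binary kernel-selection mask with a genetic algorithm under an \(L_{1}\)-sparsity penalty. The per-kernel utility vector, the penalty weight LAM, and all population hyperparameters are illustrative assumptions; in the paper, candidate masks would instead be scored against the trained CNN itself.

# Illustrative sketch only: a genetic algorithm searching for a binary
# kernel-selection mask under an L1-sparsity penalty. The "utility" vector
# is a hypothetical stand-in for each kernel's contribution; the actual
# model evaluates the pruned CNN on validation data.
import numpy as np

rng = np.random.default_rng(0)
N_KERNELS, POP, GENS = 64, 40, 100   # assumed sizes, not from the paper
LAM = 0.4                            # assumed weight of the L1 penalty

utility = rng.random(N_KERNELS)      # stand-in per-kernel importance scores

def fitness(mask):
    # Retained utility minus the L1 norm of the binary decision vector;
    # keeping fewer kernels incurs a smaller penalty.
    return utility @ mask - LAM * mask.sum()

pop = rng.integers(0, 2, size=(POP, N_KERNELS))
for _ in range(GENS):
    scores = np.array([fitness(m) for m in pop])
    # Tournament selection: keep the fitter of two random individuals.
    i, j = rng.integers(0, POP, size=(2, POP))
    parents = pop[np.where(scores[i] > scores[j], i, j)]
    # Uniform crossover between shuffled parent pairs.
    mates = parents[rng.permutation(POP)]
    children = np.where(rng.random((POP, N_KERNELS)) < 0.5, parents, mates)
    # Bit-flip mutation.
    flip = rng.random((POP, N_KERNELS)) < 0.01
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"kept {int(best.sum())} of {N_KERNELS} kernels")

Because the decision variables are per-kernel, the surviving mask does double duty: it prunes the network and, read layer by layer, maps where redundancy concentrates.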


Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Author information

Corresponding author

Correspondence to El houssaine Hssayni.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Hssayni, E., Joudar, NE. & Ettaouil, M. Localization and reduction of redundancy in CNN using L1-sparsity induction. J Ambient Intell Human Comput 14, 13715–13727 (2023). https://doi.org/10.1007/s12652-022-04025-2
