
A Multilayer Network-Based Approach to Represent, Explore and Handle Convolutional Neural Networks

Cognitive Computation

Abstract

Deep learning techniques and tools have experienced enormous growth and widespread diffusion in recent years. Computational biology and cognitive neuroscience are among the areas where deep learning has become most widespread. At the same time, a strong need has emerged for tools able to explore, understand, and possibly manipulate a deep learning model. We propose an approach to map a deep learning model into a multilayer network. Our approach is tailored to Convolutional Neural Networks (CNNs) but can be easily extended to other architectures. To show how our mapping approach enables the exploration and management of deep learning networks, we illustrate a technique for compressing a CNN. It detects whether there are convolutional layers that can be pruned without losing too much information and, in the affirmative case, returns a new CNN obtained from the original one by pruning such layers. We demonstrate the effectiveness of the multilayer mapping approach and the corresponding compression algorithm on the VGG16 network and two benchmark datasets, namely MNIST and CALTECH-101. On MNIST, we obtain a 0.56% increase in accuracy, precision, and recall, and a 21.43% decrease in mean epoch time. On CALTECH-101, we obtain an 11.09% increase in accuracy, a 22.27% increase in precision, a 38.66% increase in recall, and a 47.22% decrease in mean epoch time. Finally, we compare our multilayer mapping approach with a similar one based on single layers and show that the former is more effective. A multilayer network-based approach is able to capture and represent the complexity of a CNN, and it also allows several manipulations of the network. An extensive experimental analysis demonstrates the suitability of our approach and the quality of its performance.
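The compression technique is only summarized here: layers whose activations carry little information are pruned. The exact criterion is not given in this abstract; as a minimal sketch, assuming an entropy-based measure of how informative a layer's activations are (the names `activation_entropy` and `prunable_layers` are illustrative, not the paper's code):

```python
import numpy as np

def activation_entropy(feature_map, bins=16):
    """Shannon entropy of a layer's activation values: a rough proxy
    for how much information the layer carries."""
    values = np.asarray(feature_map, dtype=float).ravel()
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def prunable_layers(layer_maps, threshold=1.0):
    """Indices of layers whose activation entropy falls below
    `threshold`, i.e. candidates for pruning."""
    return [i for i, fm in enumerate(layer_maps)
            if activation_entropy(fm) < threshold]
```

A constant feature map has zero entropy and would be flagged as prunable, while a layer with richly varying activations would be kept; the threshold and the entropy measure are assumptions for illustration only.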



Data Availability

All datasets used in our experiments are public and available online. The source code is available on GitHub at https://github.com/lucav48/cnn2multilayer.

Notes

  1. An example of an aggregated value is the maximum.

  2. Here and in the following, we will use the symbols \(\mathcal{I}(i,j)\) and \(\mathcal{O}(i,j)\) to denote both the elements of the feature maps and the corresponding nodes of the class network.

  3. http://yann.lecun.com/exdb/mnist/

  4. https://www.nist.gov/srd/nist-special-database-19

  5. http://www.vision.caltech.edu/Image_Datasets/Caltech101/
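Note 2 names each feature-map element and its corresponding class-network node with the same symbol, \(\mathcal{I}(i,j)\) or \(\mathcal{O}(i,j)\). A minimal sketch of that correspondence, with illustrative names not drawn from the paper's code:

```python
def feature_map_to_nodes(feature_map, role="I"):
    """Label every cell (i, j) of a 2D feature map as a network node
    named '<role>(i,j)', keeping its activation as a node attribute.
    `role` is 'I' for input maps and 'O' for output maps."""
    nodes = {}
    for i, row in enumerate(feature_map):
        for j, activation in enumerate(row):
            nodes[f"{role}({i},{j})"] = {"activation": activation}
    return nodes
```

Under this convention, node \(\mathcal{I}(0,1)\) is the cell in row 0, column 1 of the input feature map; how edges between such nodes are weighted is part of the mapping approach described in the paper itself.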


Author information

Corresponding author

Correspondence to Luca Virgili.

Ethics declarations

Ethics Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Conflict of Interest

The authors declare no competing interests.




Cite this article

Amelio, A., Bonifazi, G., Corradini, E. et al. A Multilayer Network-Based Approach to Represent, Explore and Handle Convolutional Neural Networks. Cogn Comput 15, 61–89 (2023). https://doi.org/10.1007/s12559-022-10084-6

