Investigating Similarity Metrics for Convolutional Neural Networks in the Case of Unstructured Pruning

  • Conference paper
Pattern Recognition Applications and Methods (ICPRAM 2020)

Abstract

Deep Neural Networks (DNNs) are essential tools of modern science and technology. The current lack of explainability of their inner workings, and of principled ways to tame their architectural complexity, has triggered a great deal of research in recent years. There is hope that, by making sense of the representations in their hidden layers, we can gain insight into how to reduce model complexity, without degrading performance, by pruning useless connections. It is then natural to ask the following question: how similar are the representations in pruned and unpruned models? Even small insights could help in finding principled ways to design good lightweight models, enabling significant savings of computation, memory, time, and energy. In this work, we investigate this question empirically across a wide spectrum of similarity measures, network architectures, and datasets. We find that the results depend critically on the similarity measure used; we briefly discuss the origin of these differences and conclude that further investigation is required before substantial advances can be made.
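To make the setting concrete, the sketch below shows one-shot unstructured magnitude pruning in NumPy: connections with the smallest absolute weights are zeroed out individually, irrespective of their position in the tensor. This is a minimal illustration written for this summary, not the authors' implementation; the function name and threshold rule are our own simplifications.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a binary mask that zeroes out the smallest-magnitude weights.

    Unstructured pruning: connections are removed one by one, with no
    constraint on rows, columns, channels, or filters.
    """
    k = int(sparsity * weights.size)  # number of connections to remove
    if k == 0:
        return np.ones_like(weights)
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Example: remove 80% of the connections of a random 64x128 layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
mask = magnitude_prune(W, sparsity=0.8)
print(f"fraction of weights kept: {mask.mean():.2f}")  # ~0.20
```

The question investigated here is then: how similar are the hidden representations of the network that uses the masked weights (W * mask) to those of the unpruned network, and does the answer change with the similarity measure?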


Notes

  1. By spatial dimension we mean the size of each single channel of the image (or analogous two-dimensional structure) after applying the convolutions performed by the given layer.

  2. I.e., for each column, its mean across the instances must be 0 (see the sketch after these notes).

  3. I.e., for each row/column, its mean across the columns/rows must be 0.

  4. Such that its value lies between 0 and 1.

  5. The exact value depends on the presence of layers or parameters not affected by pruning, such as batch normalization.

  6. https://maps.google.com.

  7. In that work, we employed Mean SVCCA Similarity only, but the shape produced by PWCCA is very similar.
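Notes 2 to 4 describe the centering and normalization steps that recur in similarity measures of the kind compared in the paper. Below is a minimal NumPy sketch of these operations, culminating in linear CKA, one such normalized measure; the code and names are ours, written for illustration, and are not taken from the paper.

```python
import numpy as np

def center_columns(X: np.ndarray) -> np.ndarray:
    """Note 2: give each column zero mean across the instances (rows)."""
    return X - X.mean(axis=0, keepdims=True)

def double_center(K: np.ndarray) -> np.ndarray:
    """Note 3: make every row and every column of K have zero mean."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # the usual centering matrix
    return H @ K @ H

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices (rows = instances).

    The normalization (note 4) keeps the value between 0 and 1.
    """
    Xc, Yc = center_columns(X), center_columns(Y)
    Kc, Lc = Xc @ Xc.T, Yc @ Yc.T       # centered Gram matrices
    hsic = (Kc * Lc).sum()              # tr(Kc @ Lc) for symmetric matrices
    return float(hsic / np.sqrt((Kc * Kc).sum() * (Lc * Lc).sum()))

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))      # 100 instances, 64 neurons

# Column-centering X before taking the Gram product is equivalent to
# double-centering the Gram matrix of the raw X (notes 2 and 3 agree).
assert np.allclose(double_center(X @ X.T),
                   center_columns(X) @ center_columns(X).T)

# CKA is invariant to orthogonal transformations of the representation.
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
print(f"CKA(X, XQ) = {linear_cka(X, X @ Q):.3f}")  # 1.000
```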


Author information

Correspondence to Alessio Ansuini, Eric Medvet, Felice Andrea Pellegrino or Marco Zullich.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Ansuini, A., Medvet, E., Pellegrino, F.A., Zullich, M. (2020). Investigating Similarity Metrics for Convolutional Neural Networks in the Case of Unstructured Pruning. In: De Marsico, M., Sanniti di Baja, G., Fred, A. (eds) Pattern Recognition Applications and Methods. ICPRAM 2020. Lecture Notes in Computer Science, vol 12594. Springer, Cham. https://doi.org/10.1007/978-3-030-66125-0_6


  • DOI: https://doi.org/10.1007/978-3-030-66125-0_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-66124-3

  • Online ISBN: 978-3-030-66125-0

  • eBook Packages: Computer Science, Computer Science (R0)
