Stable Low-Rank Tensor Decomposition for Compression of Convolutional Neural Network

  • Conference paper
  • In: Computer Vision – ECCV 2020 (ECCV 2020)

Abstract

Most state-of-the-art deep neural networks are over-parameterized and exhibit a high computational cost. A straightforward approach to this problem is to replace convolutional kernels with their low-rank tensor approximations, for which the Canonical Polyadic (CP) tensor decomposition is one of the most suitable models. However, fitting the convolutional tensors with numerical optimization algorithms often encounters diverging components, i.e., extremely large rank-one tensors that cancel each other. Such degeneracy often causes non-interpretable results and numerical instability during fine-tuning of the neural network. This paper is the first study of degeneracy in the tensor decomposition of convolutional kernels. We present a novel method that stabilizes the low-rank approximation of convolutional kernels and ensures efficient compression while preserving the high performance of the neural networks. We evaluate our approach on popular CNN architectures for image classification and show that our method results in much lower accuracy degradation and provides consistent performance.
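
The sketch below illustrates the baseline operation the paper builds on: fitting a CP decomposition to a 4-way convolutional kernel and inspecting the component magnitudes, whose blow-up is the degeneracy described above. It is a minimal example using the open-source TensorLy library; the kernel shape, rank, and diagnostic are illustrative assumptions and do not reproduce the authors' stabilized decomposition.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

tl.set_backend("numpy")

# Toy 4-way convolutional kernel: (out_channels, in_channels, height, width).
# In practice this would be the weight tensor of a trained layer.
kernel = np.random.randn(64, 32, 3, 3)

# Rank-R CP approximation fitted by alternating least squares (ALS).
weights, factors = parafac(
    tl.tensor(kernel), rank=16, n_iter_max=500,
    init="random", normalize_factors=True, random_state=0,
)

# Relative approximation error of the low-rank kernel.
approx = tl.cp_to_tensor((weights, factors))
rel_err = np.linalg.norm(kernel - approx) / np.linalg.norm(kernel)
print(f"relative error: {rel_err:.3f}")

# With normalized factors, each entry of `weights` is the norm of one rank-one
# component; very large, mutually cancelling components (diverging weights)
# are the degeneracy that destabilizes subsequent fine-tuning.
print("largest component norm:", np.abs(weights).max())

The CP factors obtained this way can be mapped onto a sequence of smaller pointwise and separable convolutions, as in the fine-tuned CP scheme of Lebedev et al. [33] on which this work builds.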

Notes

  1. A rank-1 tensor of size \(n_1\times n_2\times \dots \times n_{d}\) is an outer product of d vectors with dimensions \(n_1, n_2,\dots , n_d\).

  2. The mode-j unfolding of an order-d tensor of size \(n_1\times n_2 \times \dots \times n_d\) reorders the elements of the tensor into a matrix with \(n_j\) rows and \(n_1\dots n_{j - 1}n_{j + 1}\dots n_d\) columns (see the short numerical sketch after these notes).

  3. As shown in [53], the RMS error is not the only minimization criterion for a particular computer vision task.
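
A minimal numerical illustration of footnotes 1 and 2 (the rank-1 outer product and the mode-j unfolding); the array sizes are arbitrary and the column ordering of the unfolding depends on the chosen convention.

import numpy as np

# Footnote 1: a rank-1 tensor of size n1 x n2 x n3 is the outer product of d = 3 vectors.
a, b, c = np.random.randn(4), np.random.randn(5), np.random.randn(6)
rank_one = np.einsum("i,j,k->ijk", a, b, c)        # shape (4, 5, 6)

# Footnote 2: the mode-j unfolding puts mode j on the rows and merges the remaining
# modes into the columns (implemented here as a move-axis followed by a reshape).
def unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

print(unfold(rank_one, 1).shape)                   # (5, 24): n_2 rows, n_1*n_3 columns
print(np.linalg.matrix_rank(unfold(rank_one, 1)))  # 1: every unfolding of a rank-1 tensor has matrix rank 1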

References

  1. Astrid, M., Lee, S.: CP-decomposition with tensor power method for convolutional neural networks compression. In: 2017 IEEE International Conference on Big Data and Smart Computing, BigComp 2017, Jeju Island, South Korea, 13–16 February 2017, pp. 115–118. IEEE (2017). https://doi.org/10.1109/BIGCOMP.2017.7881725

  2. Bulat, A., Kossaifi, J., Tzimiropoulos, G., Pantic, M.: Matrix and tensor decompositions for training binary neural networks. arXiv preprint arXiv:1904.07852 (2019)

  3. Bulat, A., Kossaifi, J., Tzimiropoulos, G., Pantic, M.: Incremental multi-domain learning with network latent tensor factorization. In: AAAI (2020)

  4. Chen, T., Lin, J., Lin, T., Han, S., Wang, C., Zhou, D.: Adaptive mixture of low-rank factorizations for compact neural modeling. In: CDNNRIA Workshop, NIPS (2018)

  5. Cichocki, A., Lee, N., Oseledets, I., Phan, A.H., Zhao, Q., Mandic, D.P.: Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions. Found. Trends® Mach. Learn. 9(4–5), 249–429 (2016)

  6. De Lathauwer, L.: Decompositions of a higher-order tensor in block terms – Part I and II. SIAM J. Matrix Anal. Appl. 30(3), 1022–1066 (2008). http://publi-etis.ensea.fr/2008/De08e. Special issue on Tensor Decompositions and Applications

  7. De Lathauwer, L., De Moor, B., Vandewalle, J.: On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 21, 1324–1342 (2000)

  8. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: a large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255 (2009)

  9. Denil, M., Shakibi, B., Dinh, L., Ranzato, M., de Freitas, N.: Predicting parameters in deep learning. In: Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS 2013, pp. 2148–2156. Curran Associates Inc. (2013)

  10. Denton, E.L., Zaremba, W., Bruna, J., LeCun, Y., Fergus, R.: Exploiting linear structure within convolutional networks for efficient evaluation. In: Advances in Neural Information Processing Systems, vol. 27, pp. 1269–1277. Curran Associates, Inc. (2014)

  11. Espig, M., Hackbusch, W., Handschuh, S., Schneider, R.: Optimization problems in contracted tensor networks. Comput. Vis. Sci. 14(6), 271–285 (2011)

  12. Figurnov, M., Ibraimova, A., Vetrov, D.P., Kohli, P.: PerforatedCNNs: acceleration through elimination of redundant convolutions. In: Advances in Neural Information Processing Systems, pp. 947–955 (2016)

  13. Gao, X., Zhao, Y., Dudziak, Ł., Mullins, R., Xu, C.Z.: Dynamic channel pruning: feature boosting and suppression. In: International Conference on Learning Representations (2019)

  14. Gusak, J., et al.: Automated multi-stage compression of neural networks. In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 2501–2508 (2019)

  15. Han, S., Pool, J., Tran, J., Dally, W.: Learning both weights and connections for efficient neural network. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 28, pp. 1135–1143 (2015)

  16. Handschuh, S.: Numerical Methods in Tensor Networks. Ph.D. thesis, Faculty of Mathematics and Informatics, University of Leipzig, Leipzig, Germany (2015)

  17. Harshman, R.A.: Foundations of the PARAFAC procedure: models and conditions for an “explanatory” multimodal factor analysis. In: UCLA Working Papers in Phonetics, vol. 16, pp. 1–84 (1970)

  18. Harshman, R.A.: The problem and nature of degenerate solutions or decompositions of 3-way arrays. In: Tensor Decomposition Workshop, Palo Alto, CA (2004)

  19. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016)

  20. He, Y., Kang, G., Dong, X., Fu, Y., Yang, Y.: Soft filter pruning for accelerating deep convolutional neural networks. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, pp. 2234–2240 (7 2018)

  21. He, Y., Lin, J., Liu, Z., Wang, H., Li, L.-J., Han, S.: AMC: AutoML for model compression and acceleration on mobile devices. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11211, pp. 815–832. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01234-2_48

  22. Hillar, C.J., Lim, L.H.: Most tensor problems are NP-hard. J. ACM (JACM) 60(6), 45 (2013)

  23. Howard, A., et al.: Searching for MobileNetV3. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1314–1324 (2019)

  24. Hua, W., Zhou, Y., De Sa, C.M., Zhang, Z., Suh, G.E.: Channel gating neural networks. In: Wallach, H., Larochelle, H., Beygelzimer, A., d’Alché Buc, F., Fox, E., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 32, pp. 1886–1896 (2019)

  25. Khoromskij, B.: \(O(d \log N) \)-quantics approximation of \(N\)-\(d\) tensors in high-dimensional numerical modeling. Constr. Approximation 34(2), 257–280 (2011)

  26. Kim, Y., Park, E., Yoo, S., Choi, T., Yang, L., Shin, D.: Compression of deep convolutional neural networks for fast and low power mobile applications. In: 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, 2–4 May 2016, Conference Track Proceedings (2016). http://arxiv.org/abs/1511.06530

  27. Kossaifi, J., Bulat, A., Tzimiropoulos, G., Pantic, M.: T-net: parametrizing fully convolutional nets with a single high-order tensor. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7822–7831 (2019)

  28. Kossaifi, J., Toisoul, A., Bulat, A., Panagakis, Y., Hospedales, T.M., Pantic, M.: Factorized higher-order CNNs with an application to spatio-temporal emotion estimation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6060–6069 (2020)

  29. Krijnen, W., Dijkstra, T., Stegeman, A.: On the non-existence of optimal solutions and the occurrence of “degeneracy” in the CANDECOMP/PARAFAC model. Psychometrika 73, 431–439 (2008)

  30. Krizhevsky, A.: Learning multiple layers of features from tiny images. Technical Report TR-2009, University of Toronto, Toronto (2009)

  31. Landsberg, J.M.: Tensors: Geometry and Applications, vol. 128. American Mathematical Society, Providence (2012)

  32. Lebedev, V.: Algorithms for speeding up convolutional neural networks. Ph.D. thesis, Skoltech, Russia (2018). https://www.skoltech.ru/app/data/uploads/2018/10/Thesis-Final.pdf

  33. Lebedev, V., Ganin, Y., Rakhuba, M., Oseledets, I., Lempitsky, V.: Speeding-up convolutional neural networks using fine-tuned CP-decomposition. In: International Conference on Learning Representations (2015)

  34. Lim, L.H., Comon, P.: Nonnegative approximations of nonnegative tensors. J. Chemom. 23(7–8), 432–441 (2009)

  35. Mitchell, B.C., Burdick, D.S.: Slowly converging PARAFAC sequences: Swamps and two-factor degeneracies. J. Chemom. 8, 155–168 (1994)

  36. Molchanov, D., Ashukha, A., Vetrov, D.: Variational dropout sparsifies deep neural networks. In: Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 2498–2507. JMLR.org (2017)

  37. Novikov, A., Podoprikhin, D., Osokin, A., Vetrov, D.: Tensorizing neural networks. In: Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS 2015, pp. 442–450. MIT Press, Cambridge (2015)

  38. Oseledets, I., Tyrtyshnikov, E.: Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comput. 31(5), 3744–3759 (2009)

  39. Paatero, P.: Construction and analysis of degenerate PARAFAC models. J. Chemom. 14(3), 285–299 (2000)

  40. Phan, A.H., Cichocki, A., Uschmajew, A., Tichavský, P., Luta, G., Mandic, D.: Tensor networks for latent variable analysis: novel algorithms for tensor train approximation. IEEE Trans. Neural Network Learn. Syst. (2020). https://doi.org/10.1109/TNNLS.2019.2956926

  41. Phan, A.H., Tichavský, P., Cichocki, A.: Tensor deflation for CANDECOMP/PARAFAC. Part 1: alternating subspace update algorithm. IEEE Trans. Signal Process. 63(12), 5924–5938 (2015)

  42. Phan, A.H., Tichavský, P., Cichocki, A.: Error preserving correction: a method for CP decomposition at a target error bound. IEEE Trans. Signal Process. 67(5), 1175–1190 (2019)

  43. Phan, A.H., Yamagishi, M., Mandic, D., Cichocki, A.: Quadratic programming over ellipsoids with applications to constrained linear regression and tensor decomposition. Neural Comput. Appl. (2020). https://doi.org/10.1007/s00521-019-04191-z

  44. Rastegari, M., Ordonez, V., Redmon, J., Farhadi, A.: XNOR-Net: ImageNet classification using binary convolutional neural networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9908, pp. 525–542. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46493-0_32

  45. Rayens, W., Mitchell, B.: Two-factor degeneracies and a stabilization of PARAFAC. Chemometr. Intell. Lab. Syst. 38(2), 173–181 (1997)

  46. Rigamonti, R., Sironi, A., Lepetit, V., Fua, P.: Learning separable filters. In: Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013, pp. 2754–2761. IEEE Computer Society, Washington, DC, USA (2013)

  47. de Silva, V., Lim, L.H.: Tensor rank and the ill-posedness of the best low-rank approximation problem. SIAM J. Matrix Anal. Appl. 30, 1084–1127 (2008)

  48. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: 3rd International Conference on Learning Representations, ICLR (2015)

  49. Stegeman, A., Comon, P.: Subtracting a best rank-1 approximation may increase tensor rank. Linear Algebra Appl. 433(7), 1276–1300 (2010)

  50. Tan, M., Le, Q.V.: EfficientNet: rethinking model scaling for convolutional neural networks. In: ICML (2019)

  51. Tichavský, P., Phan, A.H., Cichocki, A.: Sensitivity in tensor decomposition. IEEE Signal Process. Lett. 26(11), 1653–1657 (2019)

  52. Tucker, L.R.: Implications of factor analysis of three-way matrices for measurement of change. Probl. Measuring Change 15, 122–137 (1963)

  53. Vasilescu, M.A.O., Terzopoulos, D.: Multilinear subspace analysis of image ensembles. In: 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), Madison, WI, USA, 16–22 June 2003, pp. 93–99. IEEE Computer Society (2003). https://doi.org/10.1109/CVPR.2003.1211457

  54. Vervliet, N., Debals, O., Sorber, L., Barel, M.V., Lathauwer, L.D.: Tensorlab 3.0, March 2016. http://www.tensorlab.net

  55. Wang, D., Zhao, G., Li, G., Deng, L., Wu, Y.: Lossless compression for 3DCNNs based on tensor train decomposition. CoRR abs/1912.03647 (2019). http://arxiv.org/abs/1912.03647

  56. Zacharov, I., et al.: Zhores – petaflops supercomputer for data-driven modeling, machine learning and artificial intelligence installed in Skolkovo Institute of Science and Technology. Open Eng. 9(1) (2019)

  57. Zhang, T., Golub, G.H.: Rank-one approximation to high order tensors. SIAM J. Matrix Anal. Appl. 23(2), 534–550 (2001). https://doi.org/10.1137/S0895479899352045

  58. Zhang, X., Zou, J., He, K., Sun, J.: Accelerating very deep convolutional networks for classification and detection. IEEE Trans. Pattern Anal. Mach. Intell. 38(10), 1943–1955 (2016)

  59. Zhuang, Z., et al.: Discrimination-aware channel pruning for deep neural networks. In: Advances in Neural Information Processing Systems, pp. 883–894 (2018)


Acknowledgements

The work of A.-H. Phan, A. Cichocki, I. Oseledets, J. Gusak, K. Sobolev, K. Sozykin and D. Ermilov was supported by the Ministry of Education and Science of the Russian Federation under Grant 14.756.31.0001. The results of this work were achieved during the cooperation project with Noah’s Ark Lab, Huawei Technologies. The authors sincerely thank the Referees for very constructive comments which helped to improve the quality and presentation of the paper. The computing for this project was performed on the Zhores CDISE HPC cluster at Skoltech [56].

Author information

Corresponding author

Correspondence to Anh-Huy Phan.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Phan, A.-H., et al. (2020). Stable Low-Rank Tensor Decomposition for Compression of Convolutional Neural Network. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol. 12374. Springer, Cham. https://doi.org/10.1007/978-3-030-58526-6_31

  • DOI: https://doi.org/10.1007/978-3-030-58526-6_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58525-9

  • Online ISBN: 978-3-030-58526-6

  • eBook Packages: Computer Science, Computer Science (R0)
