
Tensor neural network models for tensor singular value decompositions

Computational Optimization and Applications

Abstract

Tensor decompositions have become increasingly prevalent in recent years. Traditionally, a tensor is represented or decomposed as a sum of rank-one outer products using the CANDECOMP/PARAFAC model, the Tucker model, or some variation thereof. The motivation of these decompositions is to find an approximate representation of a given tensor. The main purpose of this paper is to develop two neural network models for finding a t-product-based approximation of a given third-order tensor. Theoretical analysis shows that each of the neural network models is guaranteed to converge. Computer simulation results further substantiate that the models can effectively find the left and right singular tensor subspaces.
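The t-product underlying the tensor singular value decomposition studied here multiplies two third-order tensors by taking the FFT along the third mode, multiplying matching frontal slices as ordinary matrices, and transforming back; the t-SVD then reduces to a matrix SVD of each Fourier-domain slice. The following NumPy sketch illustrates this classical (non-neural) construction; the function names `t_product`, `t_svd`, and `t_transpose` are ours for illustration and are not the paper's neural network models, which compute the singular tensor subspaces dynamically instead.

```python
import numpy as np

def t_product(A, B):
    # t-product: FFT along mode 3, face-wise matrix products, inverse FFT
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)
    return np.fft.ifft(Ch, axis=2).real

def t_transpose(A):
    # tensor transpose: transpose each frontal slice, reverse slices 2..n3
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

def t_svd(A):
    # t-SVD A = U * S * V^T (all products are t-products):
    # an ordinary SVD of each frontal slice in the Fourier domain
    n1, n2, n3 = A.shape
    Uh = np.zeros((n1, n1, n3), dtype=complex)
    Sh = np.zeros((n1, n2, n3), dtype=complex)
    Vh = np.zeros((n2, n2, n3), dtype=complex)
    Ah = np.fft.fft(A, axis=2)
    r = np.arange(min(n1, n2))
    for k in range(n3 // 2 + 1):      # real input: only half the slices needed
        u, s, vh = np.linalg.svd(Ah[:, :, k])
        Uh[:, :, k], Vh[:, :, k] = u, vh.conj().T
        Sh[r, r, k] = s
        if 0 < k < n3 - k:            # mirror slice keeps the inverse FFT real
            Uh[:, :, n3 - k], Vh[:, :, n3 - k] = u.conj(), vh.T
            Sh[r, r, n3 - k] = s
    return (np.fft.ifft(Uh, axis=2).real,
            np.fft.ifft(Sh, axis=2).real,
            np.fft.ifft(Vh, axis=2).real)
```

For a random real tensor `A`, `t_product(t_product(U, S), t_transpose(V))` reproduces `A` up to floating-point error. The mirrored slices in `t_svd` exploit the conjugate symmetry of the FFT of a real tensor, so the inverse transforms of the factors are real.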


Notes

  1. The database can be obtained from https://media.xiph.org/video/derf/.



Acknowledgements

We thank the editor and two anonymous reviewers for their detailed and helpful comments.

Author information

Corresponding author

Correspondence to Maolin Che.

Additional information


X. Wang: This author is partially supported by Shanghai Key Laboratory of Contemporary Applied Mathematics, Natural Science Foundation of Gansu Province and Innovative Ability Promotion Project in Colleges and Universities of Gansu Province 2019B-146. M. Che: This author is supported by the National Natural Science Foundation of China under Grant 11901471. Y. Wei: This author is supported by the National Natural Science Foundation of China under Grant 11771099 and Innovation Program of Shanghai Municipal Education Commission.


About this article


Cite this article

Wang, X., Che, M. & Wei, Y. Tensor neural network models for tensor singular value decompositions. Comput Optim Appl 75, 753–777 (2020). https://doi.org/10.1007/s10589-020-00167-1

