
On optimal low rank Tucker approximation for tensors: the case for an adjustable core size

Published in: Journal of Global Optimization

Abstract

Approximating high order tensors by low Tucker-rank tensors has applications in psychometrics, chemometrics, computer vision, and biomedical informatics, among others. Traditionally, solution methods for finding a low Tucker-rank approximation presume that the size of the core tensor is specified in advance, which may not be a realistic assumption in many applications. In this paper we propose a new computational model in which the configuration and the size of the core become part of the decisions to be optimized. Our approach is based on the so-called maximum block improvement method for non-convex block optimization. Numerical tests on various real data sets from gene expression analysis and image compression are reported, which show the promising performance of the proposed algorithms.
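To make the fixed-core setting concrete, below is a minimal sketch of the classical truncated higher-order SVD (HOSVD) baseline, in which the core size `ranks` must be specified in advance — precisely the assumption the paper relaxes. This is not the paper's adjustable-core maximum block improvement algorithm; function names and the NumPy-based implementation are illustrative.

```python
import numpy as np

def hosvd_truncated(T, ranks):
    """Truncated HOSVD: a standard baseline for low Tucker-rank
    approximation where the core size `ranks` is fixed in advance."""
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold T along `mode` and keep the top-r left singular vectors.
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])
    # Form the core by contracting each mode with the factor transpose.
    core = T.copy()
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Multiply the core back by the factor matrices along each mode."""
    X = core
    for mode, U in enumerate(factors):
        X = np.moveaxis(
            np.tensordot(U, np.moveaxis(X, mode, 0), axes=1), 0, mode)
    return X
```

With full ranks the reconstruction is exact; shrinking any entry of `ranks` trades accuracy for a smaller core, and the paper's model makes that trade-off itself a decision variable.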


Notes

  1. We thank Professor Xiuzhen Huang of Arkansas State University for providing us with this data set.

  2. We thank Professor Marieke Timmerman for providing us with basic code for the DIFFIT approach.

  3. A Matlab implementation of the ARD method is available at http://www.erpwavelab.org.


Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (Grants 11301436 and 11371242), the National Science Foundation of the USA (Grant CMMI-1161242), the Natural Science Foundation of Shanghai (Grant 12ZR1410100), and the Ph.D. Programs Foundation of the Chinese Ministry of Education (Grant 20123108120002). We would like to thank the anonymous referee for the insightful suggestions.

Author information

Correspondence to Shuzhong Zhang.


Cite this article

Chen, B., Li, Z. & Zhang, S. On optimal low rank Tucker approximation for tensors: the case for an adjustable core size. J Glob Optim 62, 811–832 (2015). https://doi.org/10.1007/s10898-014-0231-x
