
Neural Computing and Applications, Volume 31, Issue 12, pp 8327–8335

Unsupervised learning low-rank tensor from incomplete and grossly corrupted data

  • Zhijun Meng
  • Yaoming Zhou (corresponding author)
  • Yongjia Zhao
Machine Learning - Applications & Techniques in Cyber Intelligence

Abstract

Low-rank tensor completion and recovery have received considerable attention in the recent literature. Existing algorithms, however, tend to fail when the multiway data are simultaneously contaminated by arbitrary outliers and missing values. In this paper, we study the unsupervised tensor learning problem of recovering a low-rank tensor from an incomplete and grossly corrupted multidimensional array. We introduce a unified framework for this problem by replacing the linear projection operator constraint with a simple equation, and we reformulate the result as two convex optimization problems through different convex approximations of the tensor rank. We propose two globally convergent algorithms, derived from the alternating direction augmented Lagrangian (ADAL) method and its linearized proximal variant, respectively, for solving these problems. Experimental results on synthetic and real-world data validate the effectiveness and superiority of our methods.
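To make the recovery model concrete, the sketch below shows one common instantiation of this class of methods: the observations are modeled as a low-rank tensor plus sparse gross corruption on the observed entries, the tensor rank is approximated by the sum of nuclear norms (SNN) of the mode unfoldings, and the resulting convex program is solved with an ADAL/ADMM-style splitting. This is a minimal illustrative sketch under those assumptions, not the authors' algorithm; the function names and the parameters lambda_, mu, and n_iter are hypothetical choices.

```python
# A minimal NumPy sketch of SNN-regularized tensor recovery from incomplete,
# grossly corrupted observations via an ADAL/ADMM-style splitting. Illustrative
# only; not the paper's algorithm, and parameter values are hypothetical.
import numpy as np

def unfold(T, mode):
    """Mode-`mode` matricization of tensor T."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold`: rebuild the tensor from its mode-`mode` unfolding."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    """Singular value thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Soft thresholding: prox of tau * (l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def recover(M, mask, lambda_=0.1, mu=1.0, n_iter=200):
    """Recover low-rank L from P_Omega(M) = P_Omega(L + S), where S is sparse
    gross corruption and the boolean array `mask` marks observed entries."""
    shape, N = M.shape, M.ndim
    L = np.where(mask, M, 0.0)
    S = np.zeros(shape)
    Y = [L.copy() for _ in range(N)]          # one nuclear-norm copy per mode
    U = [np.zeros(shape) for _ in range(N)]   # scaled duals for Y_i = L
    W = np.zeros(shape)                       # scaled dual for the data constraint
    for _ in range(n_iter):
        # Y_i-step: SVT on each mode unfolding of (L - U_i).
        for i in range(N):
            Y[i] = fold(svt(unfold(L - U[i], i), 1.0 / mu), i, shape)
        # S-step: soft-threshold the observed residual; S is zero off Omega.
        S = np.where(mask, shrink(M - L - W, lambda_ / mu), 0.0)
        # L-step: closed-form average of the consensus and data-fit terms.
        avg = sum(Y[i] + U[i] for i in range(N))
        L = np.where(mask, (avg + M - S - W) / (N + 1), avg / N)
        # Dual ascent on both sets of constraints.
        for i in range(N):
            U[i] += Y[i] - L
        W += np.where(mask, L + S - M, 0.0)
    return L, S
```

A typical synthetic experiment in this vein would form M = L0 + S0 from a known low-multilinear-rank L0 and a sparse outlier tensor S0, hide a random subset of entries via `mask`, and compare the recovered L against L0.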

Keywords

Unsupervised learning · Low-rank tensor · Tensor recovery · Convex optimization · Alternating direction augmented Lagrangian (ADAL)

Notes

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61702023 and 91538204).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no competing interests.


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. School of Aeronautic Science and Engineering, Beihang University, Beijing, China
  2. School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
  3. The State Key Laboratory of Virtual Reality Technology and Systems, Beijing, China
