
A general multi-factor norm based low-rank tensor completion framework

Published in Applied Intelligence.

Abstract

Low-rank tensor completion aims to recover the missing entries of a tensor from its partially observed data by exploiting the tensor's low-rank property. Since rank minimization is an NP-hard problem, the convex nuclear-norm surrogate is usually used in place of the rank function and has obtained promising results. However, the nuclear norm is not a tight envelope of the rank function and tends to over-penalize large singular values. In this paper, inspired by the effectiveness of the matrix Schatten-q norm, which is a tighter approximation of the rank function when 0 < q < 1, we generalize the matrix Schatten-q norm to the tensor case and propose a Unitary Transformed Tensor Schatten-q Norm (UTT-Sq) defined with an arbitrary unitary transform matrix. More importantly, we derive a factor tensor norm surrogate theorem: minimizing the large-scale UTT-Sq norm (which is nonconvex and intractable when 0 < q < 1) is equivalent to minimizing a weighted sum of multiple small-scale UTT-\(S_{q_{i}}\) norms with different qi satisfying qi ≥ 1. Based on this equivalence, we propose a low-rank tensor completion framework with a Unitary Transformed Tensor Multi-Factor Norm (UTTMFN) penalty. The resulting optimization problem is solved by the Alternating Direction Method of Multipliers (ADMM), with a proof of convergence. Experimental results on synthetic data, images and videos show that the proposed UTTMFN achieves results competitive with state-of-the-art tensor completion methods.
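The UTT-Sq quasi-norm described above applies a unitary transform along the third tensor mode and then aggregates the q-th powers of the singular values of every frontal slice of the transformed tensor. The following is a minimal sketch of that computation for a 3-way tensor, not the authors' code: the unitary DFT is used here as one common choice of transform, and the function name is ours.

```python
import numpy as np

def utt_schatten_q(X, q=0.5, transform=None):
    """Sketch of a transformed tensor Schatten-q quasi-norm for a 3-way tensor.

    A unitary transform is applied along mode 3 (by default the unitary DFT,
    one common choice), then sigma^q is summed over the singular values of all
    frontal slices and the 1/q root is taken.
    """
    n3 = X.shape[2]
    if transform is None:
        # Unitary DFT along the third mode ("ortho" keeps the transform unitary).
        Xt = np.fft.fft(X, axis=2, norm="ortho")
    else:
        # Apply an arbitrary n3-by-n3 unitary transform matrix along mode 3.
        Xt = np.einsum("kl,ijl->ijk", transform, X)
    total = 0.0
    for k in range(n3):
        # Singular values of the k-th frontal slice of the transformed tensor.
        s = np.linalg.svd(Xt[:, :, k], compute_uv=False)
        total += np.sum(s ** q)
    return total ** (1.0 / q)
```

A useful sanity check: for q = 2 the quantity reduces to the Frobenius norm of the tensor, since a unitary transform preserves Frobenius energy and the sum of squared singular values of each slice equals that slice's squared Frobenius norm.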



Data Availability

The data supporting the study’s findings are available from the corresponding author, lianyi_1999@nuaa.edu.cn or jltian@nuaa.edu.cn, upon reasonable request.

Notes

  1. https://www.dropbox.com/s/npyc2t5zkjlb7tt/CodeBFMNM.zip?dl=0

  2. http://github.com/canyilu

  3. https://github.com/TaiXiangJiang/PSTNN

  4. https://github.com/linchenee/LRMF-LRTF

  5. http://trace.eas.asu.edu/yuv/


Author information

Corresponding author: Yulian Zhu.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Tian, J., Zhu, Y. & Liu, J. A general multi-factor norm based low-rank tensor completion framework. Appl Intell 53, 19317–19337 (2023). https://doi.org/10.1007/s10489-023-04477-9
