Low-rank matrix recovery via novel double nonconvex nonsmooth rank minimization with ADMM

Published in: Multimedia Tools and Applications

Abstract

In recent years, exploiting the low-rank structure of data to recover a matrix has become an active topic in image processing. However, current convex relaxations of the rank function (e.g., the nuclear norm) usually lead to suboptimal solutions, because they shrink the rank components excessively and penalize every rank component equally. Many nonconvex relaxations have been proposed to address this problem, but their convergence properties are generally difficult to guarantee. In this paper, we construct a novel double nonconvex nonsmooth model, called N-DNNR, by combining the truncated Schatten-p norm and the Schatten-p norm for the first time. The model discounts the contribution of the large singular values to the rank through the truncated Schatten-p norm and approximates the rank function with the Schatten-p norm. The alternating direction method of multipliers (ADMM) is introduced to solve the N-DNNR model; it is more robust than other state-of-the-art algorithms and guarantees global convergence. In particular, updating the variable X is a nonconvex optimization problem, which we solve with the proximal gradient (PG) algorithm and the weighted singular value thresholding operator (WSVF); the latter admits a closed-form solution. Finally, experimental results on synthetic data and real images show that the N-DNNR model has stronger generalization and recovery capability than the TSPN model.
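The two ingredients named in the abstract can be sketched numerically. The following is a minimal NumPy illustration, not the authors' code: the function names, the toy rank-3 example, and the zero-weights-on-leading-singular-values scheme are assumptions for illustration. It shows (a) a truncated Schatten-p quasi-norm, which sums the p-th powers of all but the r largest singular values, and (b) a weighted singular value thresholding step, the standard closed-form proximal update for a weighted nuclear-norm-type penalty (valid when the weights are non-decreasing).

```python
import numpy as np

def truncated_schatten_p(X, r, p):
    """Truncated Schatten-p quasi-norm: sum of the p-th powers of the
    singular values of X, excluding the r largest, so the large,
    information-carrying singular values are not penalized."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s[r:] ** p))

def weighted_svt(Y, weights, tau):
    """Weighted singular value thresholding: shrink each singular value
    by tau * w_i, truncate at zero, and rebuild the matrix. This is the
    closed-form proximal step for a weighted nuclear-norm penalty when
    the weights are non-decreasing."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau * np.asarray(weights, dtype=float), 0.0)
    return (U * s_shrunk) @ Vt  # scale columns of U, then recombine

# Toy demo: denoise a noisy observation of an exactly rank-3 matrix.
rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank 3
noisy = L + 0.1 * rng.standard_normal((20, 20))

w = np.ones(20)
w[:3] = 0.0  # leave the 3 leading singular values untouched (truncation idea)
denoised = weighted_svt(noisy, w, tau=2.0)
```

With the leading weights set to zero, only the small (noise-dominated) singular values are shrunk, which is the intuition behind ignoring large singular values in the truncated penalty.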


References

  1. Boyd S, Parikh N, Chu E (2011) Distributed optimization and statistical learning via the alternating direction method of multipliers. Now Publishers Inc

  2. Cabral R, De la Torre F, Costeira JP, Bernardino A (2013) Unifying nuclear norm and bilinear factorization approaches for low-rank matrix decomposition. In: Proceedings of the IEEE International Conference on Computer Vision, pp 2488–2495

  3. Cao W, Sun J, Xu Z (2013) Fast image deconvolution using closed-form thresholding formulas of \(l_q\) (\(q=1/2, 2/3\)) regularization. J Vis Commun Image Represent 24(1):31–41

  4. Chen B, Sun H, Xia G, Feng L, Li B (2018) Human motion recovery utilizing truncated Schatten \(p\)-norm and kinematic constraints. Inf Sci 450:89–108

  5. Dorffer C, Puigt M, Delmaire G, Roussel G (2017) Fast nonnegative matrix factorization and completion using Nesterov iterations. In: International Conference on Latent Variable Analysis and Signal Separation, Springer, pp 26–35

  6. Feng L, Sun H, Sun Q, Xia G (2016) Image compressive sensing via truncated Schatten-\(p\) norm regularization. Signal Process Image Commun 47:28–41

  7. Gao C, Wang N, Yu Q, Zhang Z (2011) A feasible nonconvex relaxation approach to feature selection. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol 25

  8. Gu S, Zhang L, Zuo W, Feng X (2014) Weighted nuclear norm minimization with application to image denoising. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 2862–2869

  9. Gu S, Xie Q, Meng D, Zuo W, Feng X, Zhang L (2017) Weighted nuclear norm minimization and its applications to low level vision. Int J Comput Vis 121(2):183–208

  10. Hastie T, Mazumder R, Lee JD, Zadeh R (2015) Matrix completion and low-rank SVD via fast alternating least squares. J Mach Learn Res 16(1):3367–3402

  11. Hu Y, Zhang D, Ye J, Li X, He X (2012) Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Trans Pattern Anal Mach Intell 35(9):2117–2130

  12. Kumar A (2022) Biological tomato leaf disease classification using deep learning framework. Int J Biol Biomed Eng 16:241–244

  13. Kumar A (2022) Learning texture features from GLCM for classification of brain tumor MRI images using random forest classifier. WSEAS Trans Signal Process 18:60–63

  14. Li S, He J, Yi S (2018) Application of weighted nuclear norm denoising algorithm in diffusion-weighted image. In: International Conference on Mechatronics and Intelligent Robotics, Springer, pp 258–263

  15. Lin Z, Chen M, Ma Y (2010) The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055

  16. Liu Q, Shen X, Gu Y (2019) Linearized ADMM for nonconvex nonsmooth optimization with convergence analysis. IEEE Access 7:76131–76144

  17. Lu C, Zhu C, Xu C, Yan S, Lin Z (2015) Generalized singular value thresholding. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol 29

  18. Lu C, Lin Z, Yan S (2014) Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization. IEEE Trans Image Process 24(2):646–654

  19. Lu C, Tang J, Yan S, Lin Z (2015) Nonconvex nonsmooth low rank minimization via iteratively reweighted nuclear norm. IEEE Trans Image Process 25(2):829–839

  20. Luo L, Yang J, Qian J, Tai Y, Lu G-F (2016) Robust image regression based on the extended matrix variate power exponential distribution of dependent noise. IEEE Trans Neural Netw Learn Syst 28(9):2168–2182

  21. Malek-Mohammadi M, Babaie-Zadeh M, Skoglund M (2015) Performance guarantees for Schatten-\(p\) quasi-norm minimization in recovery of low-rank matrices. Signal Process 114:225–230

  22. Nie F, Huang H, Ding C (2012) Low-rank matrix recovery via efficient Schatten \(p\)-norm minimization. In: Twenty-Sixth AAAI Conference on Artificial Intelligence

  23. Peng X, Lu C, Yi Z, Tang H (2016) Connections between nuclear-norm and Frobenius-norm-based representations. IEEE Trans Neural Netw Learn Syst 29(1):218–224

  24. Sheikh HR, Sabir MF, Bovik AC (2006) A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans Image Process 15(11):3440–3451

  25. Tang C, Zhu X, Liu X, Li M, Wang P, Zhang C, Wang L (2019) Learning a joint affinity graph for multiview subspace clustering. IEEE Trans Multimed 21(7):1724–1736

  26. Tang C, Liu X, Zhu X, Xiong J, Li M, Xia J, Wang X, Wang L (2020) Feature selective projection with low-rank embedding and dual Laplacian regularization. IEEE Trans Knowl Data Eng 32(9):1747–1760

  27. Toh K-C, Yun S (2010) An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems. Pac J Optim 6:615–640

  28. Volodina O, Nasonov A, Krylov A (2020) Choice of parameters in the weighted nuclear norm method for image denoising. Comput Math Model 31(3):402–409

  29. Wang C, Zhang J, Shi G (2019) Discriminative low-rank representation with Schatten-\(p\) norm for image recognition. Multimed Tools Appl

  30. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612

  31. Wang S, Ge H, Yang J, Tong Y (2020) Relaxed group low rank regression model for multi-class classification. Multimed Tools Appl 4:1–19

  32. Wen C, Qian W, Zhang Q, Cao F (2021) Algorithms of matrix recovery based on truncated Schatten \(p\)-norm. Int J Mach Learn Cybern 12(5):1557–1570

  33. Xu D, Li Z, Wu W, Ding X, Qu D (2007) Convergence of gradient descent algorithm for a recurrent neuron. In: International Symposium on Neural Networks, Springer, pp 117–122

  34. Xue Z, Dong J, Zhao Y, Liu C, Chellali R (2019) Low-rank and sparse matrix decomposition via the truncated nuclear norm and a sparse regularizer. Vis Comput 35(11):1549–1566

  35. Yang Z, Yang Z, Han D (2018) Alternating direction method of multipliers for sparse and low-rank decomposition based on nonconvex nonsmooth weighted nuclear norm. IEEE Access 6:56945–56953

  36. Yi J, Xu W (2020) Necessary and sufficient null space condition for nuclear norm minimization in low-rank matrix recovery. IEEE Trans Inf Theory 66(10):6597–6604

  37. Zhang C-H (2010) Nearly unbiased variable selection under minimax concave penalty. Ann Stat 38(2):894–942

  38. Zhang M, Huang Z-H, Zhang Y (2013) Restricted \(p\)-isometry properties of nonconvex matrix recovery. IEEE Trans Inf Theory 59(7):4316–4323

  39. Zhang H, Yang J, Shang F, Gong C, Zhang Z (2018) LRR for subspace segmentation via tractable Schatten-\(p\) norm minimization and factorization. IEEE Trans Cybern 49(5):1722–1734

  40. Zhang H, Qian J, Zhang B, Yang J, Gong C, Wei Y (2019) Low-rank matrix recovery via modified Schatten-\(p\) norm minimization with convergence guarantees. IEEE Trans Image Process 29:3132–3142

  41. Zhang H, Gong C, Qian J, Zhang B, Xu C, Yang J (2019) Efficient recovery of low-rank matrix via double nonconvex nonsmooth rank minimization. IEEE Trans Neural Netw Learn Syst 30(10):2916–2925


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China Grants 62176037, 62002041, 62205045, by the Liaoning Applied Basic Research Project Grant 2022JH2/101300264, by the Liaoning Fundamental Research Funds for Universities Grant LJKQZ2021010, by the Dalian Science and Technology Innovation Fund Grants 2022JJ12GX016, 2022JJ12GX019.

Author information

Correspondence to Yunjie Zhang or Xianping Fu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wang, Y., Zhang, Y. & Fu, X. Low-rank matrix recovery via novel double nonconvex nonsmooth rank minimization with ADMM. Multimed Tools Appl 83, 15547–15564 (2024). https://doi.org/10.1007/s11042-023-16098-y
