
\(\ell _p\) Regularized low-rank approximation via iterative reweighted singular value minimization

Published in Computational Optimization and Applications.

Abstract

In this paper we study the \(\ell _p\) (or Schatten-p quasi-norm) regularized low-rank approximation problems. In particular, we introduce a class of first-order stationary points for them and show that any local minimizer of these problems must be a first-order stationary point. In addition, we derive lower bounds for the nonzero singular values of the first-order stationary points and hence also of the local minimizers of these problems. The iterative reweighted singular value minimization (IRSVM) methods are then proposed to solve these problems, whose subproblems are shown to have a closed-form solution. Compared to the analogous methods for the \(\ell _p\) regularized vector minimization problems, the convergence analysis of these methods is significantly more challenging. We develop a novel approach to establishing the convergence of the IRSVM methods, which makes use of the expression of a specific solution of their subproblems and avoids the intricate issue of finding the explicit expression for the Clarke subdifferential of the objective of their subproblems. In particular, we show that any accumulation point of the sequence generated by the IRSVM methods is a first-order stationary point of the problems. Our computational results demonstrate that the IRSVM methods generally outperform the recently developed iterative reweighted least squares methods in terms of solution quality and/or speed.
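To make the reweighting idea in the abstract concrete, the following is a minimal numerical sketch, not the paper's actual IRSVM methods (which treat a general smooth loss and their own parameter rules). It assumes the simplest model, \(\min _X \tfrac{1}{2}\Vert X-A\Vert _F^2 + \lambda \sum _i \sigma _i(X)^p\), and all names and default values (`irsvm_sketch`, `lam`, `p`, `eps`) are illustrative. Each subproblem replaces \(\sigma _i^p\) by a linearization with weights \(w_i = p(\sigma _i + \epsilon )^{p-1}\); with nondecreasing weights, the weighted nuclear-norm proximal step has the closed-form weighted soft-thresholding solution, so for this particular loss the SVD of the data matrix is needed only once.

```python
import numpy as np

def irsvm_sketch(A, lam=0.1, p=0.5, eps=1e-3, max_iter=200, tol=1e-8):
    """Illustrative iteratively reweighted singular value scheme for
    min_X 0.5*||X - A||_F^2 + lam * sum_i sigma_i(X)^p, with 0 < p < 1.
    """
    # One SVD of A suffices: every subproblem shares its singular vectors.
    U, sa, Vt = np.linalg.svd(A, full_matrices=False)
    s = sa.copy()  # current estimate of the singular values of X
    for _ in range(max_iter):
        # Reweighting: linearize sigma^p at the (epsilon-smoothed) current values.
        # Since s is nonincreasing, the weights w are nondecreasing, which is
        # exactly the regime where the subproblem has a closed-form solution.
        w = p * (s + eps) ** (p - 1)
        # Closed-form subproblem solution: weighted soft-thresholding.
        s_new = np.maximum(sa - lam * w, 0.0)
        if np.linalg.norm(s_new - s) <= tol * max(1.0, np.linalg.norm(s)):
            s = s_new
            break
        s = s_new
    return (U * s) @ Vt
```

On a matrix with one singular value well below the threshold, the iteration drives that value to zero while only slightly shrinking the dominant ones, illustrating the lower-bound behavior of the nonzero singular values mentioned above.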


Figures 1–6 (images not available in this version).


Notes

  1. By convention, the symbol “\(\mathrm{Arg}\)” stands for the set of the solutions of the associated optimization problem. When this set is known to be a singleton, we use the symbol “\(\arg \)” to stand for it instead.

  2. \(\psi :\mathfrak {R}^l \rightarrow \mathfrak {R}\) is an absolutely symmetric function if \(\psi (x_1,x_2,\ldots ,x_l) = \psi (|x_{\pi (1)}|,|x_{\pi (2)}|,\ldots ,|x_{\pi (l)}|)\) for any permutation \(\pi \).
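For concreteness (an illustration not in the original footnote): the function underlying the Schatten-\(p\) quasi-norm, \(\psi (x) = \sum _{i=1}^{l} |x_i|^p\), is absolutely symmetric in this sense, since

```latex
\psi(x_1,\ldots,x_l) \;=\; \sum_{i=1}^{l} |x_i|^p
  \;=\; \sum_{i=1}^{l} \bigl|x_{\pi(i)}\bigr|^p
  \;=\; \psi\bigl(|x_{\pi(1)}|,\ldots,|x_{\pi(l)}|\bigr)
  \qquad \text{for any permutation } \pi,
```

so \(\Vert X\Vert _p^p = \psi (\sigma (X))\) depends only on the unordered absolute values of the singular values.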


Author information


Corresponding author

Correspondence to Zhaosong Lu.

Additional information

Zhaosong Lu was supported in part by an NSERC Discovery Grant. Jian Lu was supported in part by the National Natural Science Foundation of China under Grants 61373087 and 61472257, and by the Natural Science Foundation of Guangdong, China, under Grants 2015A030313550 and 2015A030313557.


About this article


Cite this article

Lu, Z., Zhang, Y. & Lu, J. \(\ell _p\) Regularized low-rank approximation via iterative reweighted singular value minimization. Comput Optim Appl 68, 619–642 (2017). https://doi.org/10.1007/s10589-017-9933-6

